Cortical processing of speech in individuals with auditory neuropathy spectrum disorder.
Apeksha, Kumari; Kumar, U Ajith
2018-06-01
Auditory neuropathy spectrum disorder (ANSD) is a condition in which cochlear amplification (involving the outer hair cells) is normal but neural conduction in the auditory pathway is disordered. This study investigated the cortical representation of speech in individuals with ANSD and compared it with that in individuals with normal hearing. Forty-five participants, comprising 21 individuals with ANSD and 24 individuals with normal hearing, were considered for the study. Individuals with ANSD had hearing thresholds ranging from normal hearing to moderate hearing loss. Auditory cortical evoked potentials, elicited in an oddball paradigm, were recorded using 64 scalp electrodes for /ba/-/da/ stimuli. Onset cortical responses were also recorded in a repetitive paradigm using /da/ stimuli. Sensitivity and reaction time for identifying the oddball stimuli were also obtained. Behavioural results indicated that individuals in the ANSD group had significantly lower sensitivity and longer reaction times compared to individuals with normal hearing sensitivity. Reliable P300 responses could be elicited in both groups. However, a significant difference in scalp topographies was observed between the two groups in both the repetitive and oddball paradigms. Source localization using local autoregressive analyses revealed that activations were more diffuse in individuals with ANSD than in individuals with normal hearing sensitivity. The results indicate that the brain networks and regions activated in individuals with ANSD during detection and discrimination of speech sounds differ from those in normal-hearing individuals. In general, normal-hearing individuals showed more focal activations, whereas activations in individuals with ANSD were diffuse.
Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions
Cardin, Velia
2016-01-01
Hearing loss is a common feature in human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss during older age. Aging also has well-documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control and re-allocation of attention. These cortical mechanisms are engaged during listening in effortful conditions in normal-hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful and cognitive load is constantly high, reducing the amount of available cognitive resources. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss. PMID:27242405
Hearing Loss Severity: Impaired Processing of Formant Transition Duration
ERIC Educational Resources Information Center
Coez, A.; Belin, P.; Bizaguet, E.; Ferrary, E.; Zilbovicius, M.; Samson, Y.
2010-01-01
Normal hearing listeners exploit the formant transition (FT) detection to identify place of articulation for stop consonants. Neuro-imaging studies revealed that short FT induced less cortical activation than long FT. To determine the ability of hearing impaired listeners to distinguish short and long formant transitions (FT) from vowels of the…
Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.
2018-01-01
Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752
Fujiwara, Keizo; Naito, Yasushi; Senda, Michio; Mori, Toshiko; Manabe, Tomoko; Shinohara, Shogo; Kikuchi, Masahiro; Hori, Shin-Ya; Tona, Yosuke; Yamazaki, Hiroshi
2008-04-01
The use of fluorodeoxyglucose positron emission tomography (FDG-PET) with a visual language task provided objective information on the development and plasticity of cortical language networks. This approach could help individuals involved in the habilitation and education of prelingually deafened children to decide upon the appropriate mode of communication. To investigate the cortical processing of the visual component of language and the effect of deafness upon this activity. Six prelingually deafened children participated in this study. The subjects were numbered 1-6 in the order of their spoken communication skills. In the time period between an intravenous injection of 370 MBq 18F-FDG and PET scanning of the brain, each subject was instructed to watch a video of the face of a speaking person. The cortical radioactivity of each deaf child was compared with that of a group of normal-hearing adults using a t test in a basic SPM2 model. The widest bilaterally activated cortical area was detected in subject 1, who was the worst user of spoken language. By contrast, there was no significant difference between subject 6, who was the best user of spoken language with a hearing aid, and the normal hearing group.
Butler, Blake E; Chabot, Nicole; Kral, Andrej; Lomber, Stephen G
2017-01-01
Crossmodal plasticity takes place following sensory loss, such that areas that normally process the missing modality are reorganized to provide compensatory function in the remaining sensory systems. For example, congenitally deaf cats outperform normal hearing animals on localization of visual stimuli presented in the periphery, and this advantage has been shown to be mediated by the posterior auditory field (PAF). In order to determine the nature of the anatomical differences that underlie this phenomenon, we injected a retrograde tracer into PAF of congenitally deaf animals and quantified the thalamic and cortical projections to this field. The pattern of projections from areas throughout the brain was determined to be qualitatively similar to that previously demonstrated in normal hearing animals, but with twice as many projections arising from non-auditory cortical areas. In addition, small ectopic projections were observed from a number of fields in visual cortex, including areas 19, 20a, 20b, and 21b, and area 7 of parietal cortex. These areas did not show projections to PAF in cats deafened ototoxically near the onset of hearing, and provide a possible mechanism for crossmodal reorganization of PAF. These, along with the possible contributions of other mechanisms, are considered. Copyright © 2016 Elsevier B.V. All rights reserved.
Cortical Auditory Evoked Potentials in (Un)aided Normal-Hearing and Hearing-Impaired Adults
Van Dun, Bram; Kania, Anna; Dillon, Harvey
2016-01-01
Cortical auditory evoked potentials (CAEPs) are influenced by the characteristics of the stimulus, including level and hearing aid gain. Previous studies have measured CAEPs aided and unaided in individuals with normal hearing. There is a significant difference between providing amplification to a person with normal hearing and a person with hearing loss. This study investigated this difference and the effects of stimulus signal-to-noise ratio (SNR) and audibility on the CAEP amplitude in a population with hearing loss. Twelve normal-hearing participants and 12 participants with hearing loss took part in this study. Three speech sounds—/m/, /g/, and /t/—were presented in the free field. Unaided stimuli were presented at 55, 65, and 75 dB sound pressure level (SPL) and aided stimuli at 55 dB SPL with three different gains in steps of 10 dB. CAEPs were recorded and their amplitudes analyzed. Stimulus SNRs and audibility were determined. No significant effect of stimulus level or hearing aid gain was found in normal hearers. Conversely, a significant effect was found in hearing-impaired individuals. Audibility of the signal, which in some cases is determined by the signal level relative to threshold and in other cases by the SNR, is the dominant factor explaining changes in CAEP amplitude. CAEPs can potentially be used to assess the effects of hearing aid gain in hearing-impaired users. PMID:27587919
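The audibility factor described in this abstract can be illustrated with a toy calculation (a sketch under assumed simplifications; the function `audibility_db` and its inputs are hypothetical and are not the authors' analysis): audibility is bounded both by the level above the listener's threshold and by the signal-to-noise ratio, whichever is smaller.

```python
def audibility_db(stimulus_level_db, threshold_db, noise_floor_db):
    """Toy estimate of audibility in dB.

    Audibility is limited either by the sensation level (stimulus
    level above the listener's threshold) or by the signal-to-noise
    ratio, whichever is smaller; it cannot be negative.
    """
    sensation_level = stimulus_level_db - threshold_db
    snr = stimulus_level_db - noise_floor_db
    return max(0.0, min(sensation_level, snr))

# A 65 dB SPL stimulus with a 40 dB threshold and a 30 dB noise floor:
# audibility is threshold-limited (25 dB), not SNR-limited (35 dB).
print(audibility_db(65, 40, 30))
```

In this simplified picture, raising hearing aid gain only increases CAEP amplitude while the smaller of the two limits is still growing, which is consistent with the abstract's finding that audibility, not gain per se, drives the response.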
Eggermont, Jos J
2017-09-01
It is known that hearing loss induces plastic changes in the brain, causing loudness recruitment and hyperacusis, increased spontaneous firing rates and neural synchrony, reorganizations of the cortical tonotopic maps, and tinnitus. Much less is known about the central effects of exposure to sounds that cause a temporary hearing loss, affect the ribbon synapses in the inner hair cells, and cause a loss of high-threshold auditory nerve fibers. In contrast, there is a wealth of information about central effects of long-duration sound exposures at levels ≤80 dB SPL that do not even cause a temporary hearing loss. The central effects for these moderate-level exposures described in this review include changes in central gain, increased spontaneous firing rates and neural synchrony, and reorganization of the cortical tonotopic map. A putative mechanism is outlined, and the effect of the acoustic environment during the recovery process is illustrated. Parallels are drawn with hearing problems in humans with long-duration exposures to occupational noise but with clinically normal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
Monaural Congenital Deafness Affects Aural Dominance and Degrades Binaural Processing
Tillein, Jochen; Hubka, Peter; Kral, Andrej
2016-01-01
Cortical development extensively depends on sensory experience. Effects of congenital monaural and binaural deafness on cortical aural dominance and representation of binaural cues were investigated in the present study. We used an animal model that precisely mimics the clinical scenario of unilateral cochlear implantation in an individual with single-sided congenital deafness. Multiunit responses in cortical field A1 to cochlear implant stimulation were studied in normal-hearing cats, bilaterally congenitally deaf cats (CDCs), and unilaterally deaf cats (uCDCs). Binaural deafness reduced cortical responsiveness and decreased response thresholds and dynamic range. In contrast to CDCs, in uCDCs, cortical responsiveness was not reduced, but hemispheric-specific reorganization of aural dominance and binaural interactions were observed. Deafness led to a substantial drop in binaural facilitation in CDCs and uCDCs, demonstrating the inevitable role of experience for a binaural benefit. Sensitivity to interaural time differences was more reduced in uCDCs than in CDCs, particularly at the hemisphere ipsilateral to the hearing ear. Compared with binaural deafness, unilateral hearing prevented nonspecific reduction in cortical responsiveness, but extensively reorganized aural dominance and binaural responses. The deaf ear remained coupled with the cortex in uCDCs, demonstrating a significant difference to deprivation amblyopia in the visual system. PMID:26803166
Discrimination Task Reveals Differences in Neural Bases of Tinnitus and Hearing Impairment
Husain, Fatima T.; Pajor, Nathan M.; Smith, Jason F.; Kim, H. Jeff; Rudy, Susan; Zalewski, Christopher; Brewer, Carmen; Horwitz, Barry
2011-01-01
We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone. PMID:22066003
Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R
2010-03-01
Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H2(15)O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.
Cortical auditory evoked potentials in the assessment of auditory neuropathy: two case studies.
Pearce, Wendy; Golding, Maryanne; Dillon, Harvey
2007-05-01
Infants with auditory neuropathy and possible hearing impairment are being identified at very young ages through the implementation of hearing screening programs. The diagnosis is commonly based on evidence of normal cochlear function but abnormal brainstem function. This lack of normal brainstem function is highly problematic when prescribing amplification in young infants because prescriptive formulae require the input of hearing thresholds that are normally estimated from auditory brainstem responses to tonal stimuli. Without this information, there is great uncertainty surrounding the final fitting. Cortical auditory evoked potentials may, however, still be evident and reliably recorded to speech stimuli presented at conversational levels. The case studies of two infants are presented that demonstrate how these higher order electrophysiological responses may be utilized in the audiological management of some infants with auditory neuropathy.
Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia
The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum disorder, autism, or intellectual deficits, and adults and the elderly with dementia. These populations are unable to undergo a behavioral assessment, and they generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system for the detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression.
The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold for the group with hearing loss and, on average, 14.5 dB higher for the group without hearing loss across all studied frequencies. The cortical electrophysiological thresholds obtained with the use of an automated response detection system were highly correlated with behavioral thresholds in the group of individuals with hearing loss. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
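The threshold comparison above amounts to a simple linear regression of cortical on behavioral thresholds. A minimal ordinary-least-squares sketch follows; the paired data are hypothetical, chosen only so the mean offset matches the 7.8 dB figure, and are not the study's dataset.

```python
def fit_line(x, y):
    # Ordinary least-squares slope and intercept for y = a*x + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical paired thresholds in dB (illustration only):
behavioral = [30, 40, 50, 60, 70]  # behavioral (hearing) thresholds
cortical = [38, 48, 58, 68, 77]    # cortical electrophysiological thresholds

slope, intercept = fit_line(behavioral, cortical)
# Mean cortical-minus-behavioral offset, analogous to the 7.8 dB reported.
mean_offset = sum(c - b for b, c in zip(behavioral, cortical)) / len(behavioral)
print(slope, intercept, mean_offset)
```

A slope near 1 with a positive intercept is the pattern the abstract describes: cortical thresholds track behavioral ones closely but sit a roughly constant number of dB above them.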
Marsella, Pasquale; Scorpecci, Alessandro; Vecchiato, Giovanni; Maglione, Anton Giulio; Colosimo, Alfredo; Babiloni, Fabio
2014-05-01
To date, no objective measure of the pleasantness of music perception by children with cochlear implants has been reported. The EEG alpha asymmetries of pre-frontal cortex activation are known to relate to emotional/affective engagement with a perceived stimulus. More specifically, according to the "withdrawal/approach" model, an unbalanced de-synchronization of the alpha activity in the left prefrontal cortex has been associated with a positive affective state/approach toward a stimulus, and an unbalanced de-synchronization of the same activity in the right prefrontal cortex with a negative affective state/withdrawal from a stimulus. In the present study, High-Resolution EEG with Source Reconstruction was used to compare the music-induced alpha asymmetries of the prefrontal cortex in a group of prelingually deaf implanted children and in a control group of normal-hearing children. Six normal-hearing children and six age-matched deaf children using a unilateral cochlear implant underwent High-Resolution EEG recordings as they were listening to a musical cartoon. Musical stimuli were delivered in three versions: Normal, Distort (reversed audio flow), and Mute. The EEG alpha rhythm asymmetry was analyzed: Power Spectral Density was calculated for each Region of Interest, together with a right-left imbalance index. A map of cortical activation was then reconstructed on a realistic cortical model. Asymmetries of EEG alpha rhythm in the prefrontal cortices were observed in both groups. In the normal-hearing children, the asymmetries were consistent with the withdrawal/approach model, whereas in cochlear implant users they were not. Moreover, in implanted children a different pattern of alpha asymmetries in extrafrontal cortical areas was noticed as compared to normal-hearing subjects.
The peculiar pattern of alpha asymmetries in implanted children's prefrontal cortex in response to musical stimuli suggests an inability by these subjects to discriminate normal from dissonant music and to appreciate the pleasantness of normal music. High-Resolution EEG may prove to be a promising tool for objectively measuring prefrontal cortex alpha asymmetries in child cochlear implant users. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
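The right-left imbalance index used in alpha-asymmetry analyses like the one above can be sketched as follows. This is a minimal illustration assuming a naive DFT periodogram and the common (R - L)/(R + L) index form; it is not the authors' High-Resolution EEG pipeline, and the function names are hypothetical.

```python
import math

def band_power(signal, fs, f_lo=8.0, f_hi=12.0):
    # Naive DFT periodogram power summed over the alpha band (8-12 Hz).
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(-2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(s * math.sin(-2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def alpha_asymmetry(left_power, right_power):
    # Right-left imbalance index: positive values indicate relatively
    # more right-hemisphere alpha power (i.e., less right activation,
    # since alpha power and cortical activation vary inversely).
    return (right_power - left_power) / (right_power + left_power)

# A 10 Hz sine (inside the alpha band) sampled at 100 Hz carries alpha
# power; a 20 Hz sine does not.
fs = 100
alpha_sig = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
beta_sig = [math.sin(2 * math.pi * 20 * i / fs) for i in range(fs)]
print(band_power(alpha_sig, fs), band_power(beta_sig, fs))
```

Computing this index per region of interest and per stimulus condition (Normal, Distort, Mute) would give the kind of imbalance measure the abstract describes, with its sign interpreted under the withdrawal/approach model.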
Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.
Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2012-08-01
Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.
Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S
2014-03-01
The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.
Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.
Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin
2018-01-01
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds, but also above individual thresholds of temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting state EEG from all participants as well as data on the intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors for auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks, and in SiN, despite having normal peripheral hearing as signaled by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA was not different between the two age groups. Age modulated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. 
Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question of how age-related cortical thinning and intrinsic EEG architecture relate to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency band power relate to the extent of age-related central hearing loss in older adults. The results are discussed within the current frameworks of speech processing and aging.
Shinn-Cunningham, Barbara
2017-10-17
This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu
2017-10-13
The objective of this study was to inform rehabilitation of hearing impairment by investigating changes in the neural activity of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed individual-specific virtual brains from the participants' functional magnetic resonance data using effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. Patients with long-term bilateral hearing loss showed weaker activations in the auditory and language networks, but enhanced neural activity in the default mode network, compared with normally hearing subjects; the right cerebral hemisphere showed more changes than the left. Additionally, weaker neural activity in the primary auditory cortices was strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among activated brain regions, and these interregional causal interactions implied that abnormal neural activity in the directional brain networks of deaf patients impacts cognitive function.
Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications.
Glick, Hannah; Sharma, Anu
2017-01-01
This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon in which deprivation in one sensory modality (e.g. the auditory modality, as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g. the visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral congenital deafness, single-sided deafness, adults with early-stage, mild-moderate hearing loss, and individual adult and pediatric patients exhibiting excellent and average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention.
Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing impaired populations across the lifespan. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.
Stropahl, Maren; Debener, Stefan
2017-01-01
There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users have been observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18), and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex, assessed by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that fell numerically between those of the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss.
This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.
Hearing loss in older adults affects neural systems supporting speech comprehension.
Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur
2011-08-31
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
Song, Jae-Jin; Lee, Hyo-Jeong; Kang, Hyejin; Lee, Dong Soo; Chang, Sun O; Oh, Seung Ha
2015-03-01
While deafness-induced plasticity has been investigated in the visual and auditory domains, not much is known about language processing in audiovisual multimodal environments for patients with restored hearing via cochlear implant (CI) devices. Here, we examined the effect of agreeing or conflicting visual inputs on auditory processing in deaf patients equipped with degraded artificial hearing. Ten post-lingually deafened CI users with good performance, along with matched control subjects, underwent H₂¹⁵O positron emission tomography scans while carrying out a behavioral task requiring the extraction of speech information from unimodal auditory stimuli, bimodal audiovisual congruent stimuli, and incongruent stimuli. Regardless of congruency, the control subjects demonstrated activation of the auditory and visual sensory cortices, as well as the superior temporal sulcus, the classical multisensory integration area, indicating a bottom-up multisensory processing strategy. Compared to CI users, the control subjects exhibited activation of the right ventral premotor-supramarginal pathway. In contrast, CI users activated primarily the visual cortices more in the congruent audiovisual condition than in the null condition. In addition, compared to controls, CI users displayed an activation focus in the right amygdala for congruent audiovisual stimuli. The most notable difference between the two groups was an activation focus in the left inferior frontal gyrus in CI users confronted with incongruent audiovisual stimuli, suggesting top-down cognitive modulation for audiovisual conflict. Correlation analysis revealed that good speech performance was positively correlated with right amygdala activity for the congruent condition, but negatively correlated with bilateral visual cortices regardless of congruency.
Taken together, these results suggest that for multimodal inputs, cochlear implant users are more vision-reliant when processing congruent stimuli and are more disturbed by visual distractors when confronted with incongruent audiovisual stimuli. To cope with this multimodal conflict, CI users activate the left inferior frontal gyrus to adopt a top-down cognitive modulation pathway, whereas normal hearing individuals primarily adopt a bottom-up strategy.
Cortical activation patterns correlate with speech understanding after cochlear implantation
Olds, Cristen; Pollonini, Luca; Abaya, Homer; Larky, Jannine; Loy, Megan; Bortfeld, Heather; Beauchamp, Michael S.; Oghalai, John S.
2015-01-01
Objectives Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, we used functional near-infrared spectroscopy (fNIRS) to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design We studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. We used fNIRS to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). We also used environmental sounds as a control stimulus. Behavioral measures consisted of the Speech Reception Threshold, CNC words, and AzBio Sentence tests measured in quiet. Results Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech to that of scrambled speech directly correlated with the CNC Words and AzBio Sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced cortical activations in all implanted participants. Conclusions Together, these data indicate that the responses we measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation. PMID:26709749
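The channelized and scrambled stimuli described above can be illustrated with a minimal noise-vocoder sketch; the function names, band edges, and filter settings below are illustrative assumptions, not the study's actual signal processing. The signal is split into 20 bands, each band's envelope is extracted, and the envelopes modulate band-limited noise; scrambling reassigns the envelopes to bands in random order.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_edges(n_bands, lo=100.0, hi=7000.0):
    """Logarithmically spaced band edges between lo and hi (Hz)."""
    return np.geomspace(lo, hi, n_bands + 1)

def vocode(signal, fs, n_bands=20, scramble=False, rng=None):
    """Noise-vocode `signal` into n_bands channels.

    Each band's Hilbert envelope modulates band-limited noise.
    With scramble=True, envelopes are reassigned to bands in
    random order (the 'scrambled speech' control condition).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    edges = band_edges(n_bands)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelopes.append(np.abs(hilbert(band)))   # band envelope
    order = rng.permutation(n_bands) if scramble else np.arange(n_bands)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal))
    for dst, src in enumerate(order):
        lo, hi = edges[dst], edges[dst + 1]
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += envelopes[src] * sosfiltfilt(sos, noise)
    return out
```

With `scramble=False` the output preserves the spectro-temporal envelope structure that supports intelligibility; shuffling the band order destroys it while keeping the overall energy similar.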
Hearing with Two Ears: Evidence for Cortical Binaural Interaction during Auditory Processing.
Henkin, Yael; Yaar-Soffer, Yifat; Givon, Lihi; Hildesheimer, Minka
2015-04-01
Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher cortically driven mechanisms of binaural hearing is limited. To explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple site electrodes while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) that differed by place-of-articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Fifteen (21-32 yr) young adults (6 females) with normal hearing sensitivity. By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived and the latencies and amplitudes of the components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effect of listening condition and laterality. Three consecutive BICs were identified at a mean latency of 129, 406, and 554 msec, and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. 
Maximal interaction increased significantly with progression of auditory processing from perceptual to postperceptual stages, amounting to 51%, 55%, and 75% of the sum of monaural responses for N1-BIC, P3-BIC, and LNC-BIC, respectively. Binaural interaction manifested in a decrease of the binaural response compared to the sum of the monaural responses. Furthermore, listening condition affected P3 latency only, whereas laterality effects manifested in enhanced N1 amplitudes at the left (T3) vs. right (T4) scalp electrode and in a greater left-right amplitude difference in the right compared to the left listening condition. The current AERP data provide evidence for the occurrence of cortical BICs during perceptual and postperceptual stages, presumably reflecting ongoing integration of information presented to the two ears at the final stages of auditory processing. Increasing binaural interaction with the progression of the auditory processing sequence (N1 to LNC) may support the notion that cortical BICs reflect inherited interactions from preceding stages of upstream processing together with discrete cortical neural activity involved in binaural processing. Clinically, an objective measure of cortical binaural processing has the potential of becoming an appealing neural correlate of binaural behavioral performance. American Academy of Audiology.
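The subtraction used to derive the BIC waveform, and the maximal-interaction ratio reported above, can be sketched as follows; the array names and shapes are illustrative, not taken from the study's analysis pipeline.

```python
import numpy as np

def binaural_interaction(left, right, binaural):
    """Derive the binaural interaction component (BIC) waveform:
    the summed monaural evoked responses minus the response to
    binaural stimulation, BIC = (L + R) - B.
    Inputs are equal-length 1-D arrays (averaged ERPs, microvolts)."""
    left, right, binaural = map(np.asarray, (left, right, binaural))
    return (left + right) - binaural

def maximal_interaction(bic, left, right):
    """Peak BIC amplitude as a fraction of the peak of the summed
    monaural responses (the 'maximal interaction' measure)."""
    summed = np.asarray(left) + np.asarray(right)
    return np.max(np.abs(bic)) / np.max(np.abs(summed))
```

A ratio well above zero indicates that the binaural response falls short of the simple sum of the two monaural responses, the signature of binaural interaction described in the abstract.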
Cardon, Garrett; Campbell, Julia; Sharma, Anu
2013-01-01
The developing auditory cortex is highly plastic. As such, the cortex is both primed to mature normally and at risk for re-organizing abnormally, depending upon numerous factors that determine central maturation. From a clinical perspective, at least two major components of development can be manipulated: 1) input to the cortex and 2) the timing of cortical input. Children with sensorineural hearing loss (SNHL) and auditory neuropathy spectrum disorder (ANSD) have provided a model of early deprivation of sensory input to the cortex, and demonstrated the resulting plasticity and development that can occur upon introduction of stimulation. In this article, we review several fundamental principles of cortical development and plasticity and discuss the clinical applications in children with SNHL and ANSD who receive intervention with hearing aids and/or cochlear implants. PMID:22668761
Giraud, Anne Lise; Truy, Eric
2002-01-01
Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.
Direct recordings from the auditory cortex in a cochlear implant user.
Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A
2013-06-01
Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.
Sensory coding and cognitive processing of sound in Veterans with blast exposure
Bressler, Scott; Goldberg, Hannah; Shinn-Cunningham, Barbara
2017-01-01
Recent anecdotal reports from VA audiology clinics as well as a few published studies have identified a sub-population of Service Members seeking treatment for problems communicating in everyday, noisy listening environments despite having normal to near-normal hearing thresholds. Because of their increased risk of exposure to dangerous levels of prolonged noise and transient explosive blast events, communication problems in these soldiers could be due to either hearing loss (traditional or “hidden”) in the auditory sensory periphery or from blast-induced injury to cortical networks associated with attention. We found that out of the 14 blast-exposed Service Members recruited for this study, 12 had hearing thresholds in the normal to near-normal range. A majority of these participants reported having problems specifically related to failures with selective attention. Envelope following responses (EFRs) measuring neural coding fidelity of the auditory brainstem to suprathreshold sounds were similar between blast-exposed and non-blast controls. Blast-exposed subjects performed substantially worse than non-blast controls in an auditory selective attention task in which listeners classified the melodic contour (rising, falling, or “zig-zagging”) of one of three simultaneous, competing tone sequences. Salient pitch and spatial differences made for easy segregation of the three concurrent melodies. Poor performance in the blast-exposed subjects was associated with weaker evoked response potentials (ERPs) in frontal EEG channels, as well as a failure of attention to enhance the neural responses evoked by a sequence when it was the target compared to when it was a distractor. These results suggest that communication problems in these listeners cannot be explained by compromised sensory representations in the auditory periphery, but rather point to lingering blast-induced damage to cortical networks implicated in the control of attention. 
Because all study participants also suffered from post-traumatic stress disorder (PTSD), follow-up studies are required to tease apart the contributions of PTSD and blast-induced injury to cognitive performance. PMID:27815131
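The contour-classification task described above (labeling a tone sequence as rising, falling, or zig-zagging) can be sketched with a small hypothetical scoring function; the labels and pitch representation are illustrative, not the study's implementation.

```python
import numpy as np

def contour(pitches):
    """Classify the melodic contour of a tone sequence from its
    successive pitch values (e.g. in Hz or semitones):
    'rising' if every step goes up, 'falling' if every step goes
    down, otherwise 'zig-zag' (direction changes)."""
    diffs = np.diff(pitches)
    if np.all(diffs > 0):
        return "rising"
    if np.all(diffs < 0):
        return "falling"
    return "zig-zag"
```

In the task itself, three such sequences play simultaneously with salient pitch and spatial separation, and the listener reports the contour of only the cued (target) sequence.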
Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J
2014-01-01
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.
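The Heschl's gyrus region-of-interest analysis described above amounts to comparing a per-participant contrast (peripheral minus perifoveal percent signal change) between groups. A minimal sketch, with made-up function and variable names and a Welch t-test standing in for whatever statistics the study actually used:

```python
import numpy as np
from scipy import stats

def roi_group_comparison(deaf, hearing):
    """Compare an ROI contrast value (e.g. peripheral minus perifoveal
    fMRI percent signal change in Heschl's gyrus) between groups.
    `deaf` and `hearing` are 1-D arrays, one value per participant."""
    deaf = np.asarray(deaf, float)
    hearing = np.asarray(hearing, float)
    t, p = stats.ttest_ind(deaf, hearing, equal_var=False)  # Welch t-test
    return {"deaf_mean": deaf.mean(), "hearing_mean": hearing.mean(),
            "t": t, "p": p}
```

A positive group difference here would correspond to the reported pattern: greater signal change for peripheral than perifoveal stimuli in deaf but not hearing participants.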
Effect of simulated bilateral cochlear distortion on speech discrimination in normal subjects.
Hood, J D; Prasher, D K
1990-01-01
Bilateral sensorineural hearing loss may introduce grossly dissimilar cochlear distortion at the two ears, placing abnormal demands upon the cortical analytical centres, which normally receive congruent information. As a result, the prescription of binaural hearing aids may be a handicap rather than a help. To explore this possibility, 10 normal subjects were presented with simulated, dissimilar cochlear distortion at the two ears. Discrimination scores with binaural presentation were poorer than the best monaural score, and there were clear indications that subjects selectively attended to one ear and neglected the other. In contrast, binaural presentation of identical simulated distortion at the two ears resulted in a significant improvement compared with the monaural discrimination score. Inability of the cortex to contend with incongruent speech input from the two ears may be a factor contributing to the rejection of binaural hearing aids in some individuals.
Cortical Correlates of Binaural Temporal Processing Deficits in Older Adults.
Eddins, Ann Clock; Eddins, David A
This study was designed to evaluate binaural temporal processing in young and older adults using a binaural masking level difference (BMLD) paradigm. Using behavioral and electrophysiological measures within the same listeners, a series of stimulus manipulations was used to evaluate the relative contribution of binaural temporal fine-structure and temporal envelope cues. We evaluated the hypotheses that age-related declines in the BMLD task would be more strongly associated with temporal fine-structure than envelope cues and that age-related declines in behavioral measures would be correlated with cortical auditory evoked potential (CAEP) measures. Thirty adults participated in the study, including 10 young normal-hearing, 10 older normal-hearing, and 10 older hearing-impaired adults with bilaterally symmetric, mild-to-moderate sensorineural hearing loss. Behavioral and CAEP thresholds were measured for diotic (So) and dichotic (Sπ) tonal signals presented in continuous diotic (No) narrowband noise (50-Hz wide) maskers. Temporal envelope cues were manipulated by using two different narrowband maskers; Gaussian noise (GN) with robust envelope fluctuations and low-noise noise (LNN) with minimal envelope fluctuations. The potential to use temporal fine-structure cues was controlled by varying the signal frequency (500 or 4000 Hz), thereby relying on the natural decline in phase-locking with increasing frequency. Behavioral and CAEP thresholds were similar across groups for diotic conditions, while the masking release in dichotic conditions was larger for younger than for older participants. Across all participants, BMLDs were larger for GN than LNN and for 500-Hz than for 4000-Hz conditions, where envelope and fine-structure cues were most salient, respectively. Specific age-related differences were demonstrated for 500-Hz dichotic conditions in GN and LNN, reflecting reduced binaural temporal fine-structure coding. 
No significant age effects were observed for 4000-Hz dichotic conditions, consistent with similar use of binaural temporal envelope cues across age in these conditions. For all groups, thresholds and derived BMLD values obtained using the behavioral and CAEP methods were strongly correlated, supporting the notion that CAEP measures may be useful as an objective index of age-related changes in binaural temporal processing. These results demonstrate an age-related decline in the processing of binaural temporal fine-structure cues with preserved temporal envelope coding that was similar with and without mild-to-moderate peripheral hearing loss. Such age-related changes can be reliably indexed by both behavioral and CAEP measures in young and older adults.
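The BMLD paradigm above rests on a simple construction and an equally simple derived measure: the Sπ condition inverts the signal's phase in one ear while the masking noise stays identical (No), and the BMLD is the diotic threshold minus the dichotic one. A sketch under those definitions (array names are illustrative):

```python
import numpy as np

def bmld_db(threshold_so, threshold_spi):
    """Binaural masking level difference in dB: the diotic (NoSo)
    masked threshold minus the dichotic (NoSpi) masked threshold.
    A positive value is a masking release from binaural cues."""
    return threshold_so - threshold_spi

def nospi_stimulus(signal, noise):
    """Build a two-channel NoSpi stimulus: identical masking noise
    in both ears (No), signal phase-inverted in one ear (Spi).
    Returns shape (2, n_samples): [left, right]."""
    left = noise + signal
    right = noise - signal   # pi phase shift of the signal only
    return np.stack([left, right])
```

Interaural manipulation of only the fine structure of the signal is what makes the 500-Hz conditions sensitive to temporal fine-structure coding, the cue on which the age-related deficit was observed.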
Aging and Cortical Mechanisms of Speech Perception in Noise
ERIC Educational Resources Information Center
Wong, Patrick C. M.; Jin, James Xumin; Gunasekera, Geshri M.; Abel, Rebekah; Lee, Edward R.; Dhar, Sumitrajit
2009-01-01
Spoken language processing in noisy environments, a hallmark of the human brain, is subject to age-related decline, even when peripheral hearing might be intact. The present study examines the cortical cerebral hemodynamics (measured by fMRI) associated with such processing in the aging brain. Younger and older subjects identified single words in…
Purdy, Suzanne C.; Wanigasekara, Iruni; Cañete, Oscar M.; Moore, Celia; McCann, Clare M.
2016-01-01
Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home such as work and recreation and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function post-stroke. PMID:27489401
Individual Differences Reveal Correlates of Hidden Hearing Deficits
Bharadwaj, Hari M.; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.
2015-01-01
Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371
Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.
Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth
2017-08-09
Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. 
Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.
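The envelope-coding measure at the heart of the study above depends on the temporal envelope of the modulated background noise. A sketch of how such an envelope can be extracted via the Hilbert transform, a standard approach; the stimulus parameters here are illustrative assumptions, not the study's:

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                     # sampling rate in Hz (assumed)
t = np.arange(fs) / fs         # 1 s of signal

# Sinusoidally amplitude-modulated Gaussian noise as a stand-in for the
# modulated background noise used in such experiments.
rng = np.random.default_rng(0)
carrier = rng.standard_normal(t.size)
mod_rate = 8.0                 # Hz; a rate in the range of speech envelopes
stimulus = (1.0 + np.sin(2 * np.pi * mod_rate * t)) * carrier

# The temporal envelope is the magnitude of the analytic signal.
envelope = np.abs(hilbert(stimulus))
print(envelope.shape)
```

Cortical phase locking to this envelope could then be quantified, for example, as coherence between the envelope and the recorded MEG signal.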
Kelly, R R; Tomlinson-Keasey, C
1976-12-01
Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years, 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing-impaired performed equally well with both modes (P/P and W/W), while the normal-hearing subjects did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.
Preservation of Auditory P300-Like Potentials in Cortical Deafness
Cavinato, Marianna; Rigon, Jessica; Volpato, Chiara; Semenza, Carlo; Piccione, Francesco
2012-01-01
The phenomenon of blindsight has been largely studied and refers to residual abilities of blind patients without an acknowledged visual awareness. Similarly, “deaf hearing” might represent a further example of dissociation between detection and perception of sounds. Here we report the rare case of a patient with persistent and complete cortical deafness, caused by damage to the bilateral temporo-parietal lobes, who occasionally showed unexpected reactions to environmental sounds despite denying that she heard them. We applied electrophysiological techniques for the first time to better understand the patient's auditory processing and perceptual awareness. While auditory brainstem responses were within normal limits, no middle- and long-latency waveforms could be identified. However, event-related potentials showed conflicting results. While the Mismatch Negativity could not be evoked, robust P3-like waveforms were surprisingly found in the latency range of 600–700 ms. The generation of P3-like potentials, despite extensive destruction of the auditory cortex, might imply the integrity of independent circuits necessary to process auditory stimuli even in the absence of consciousness of sound. Our results support the reverse hierarchy theory, which asserts that the higher levels of the hierarchy are immediately available for perception, while low-level information requires more specific conditions. The accurate characterization, in terms of anatomy and neurophysiology, of the auditory lesions might facilitate understanding of the neural substrates involved in deaf hearing. PMID:22272260
Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing
2017-09-01
Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might be different in processing different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited; 10 with well-performing CIs, 10 with poorly performing CIs. Ten age and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found for the FC3 and FC4 areas. No significant difference was found in N1 latencies and P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.
Acute auditory agnosia as the presenting hearing disorder in MELAS.
Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella
2008-12-01
MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.
Unilateral hearing during development: hemispheric specificity in plastic reorganizations
Kral, Andrej; Heid, Silvia; Hubka, Peter; Tillein, Jochen
2013-01-01
The present study investigates the hemispheric contributions of neuronal reorganization following early single-sided hearing (unilateral deafness). The experiments were performed on ten cats from our colony of deaf white cats. Two were identified in early hearing screening as unilaterally congenitally deaf. The remaining eight were bilaterally congenitally deaf, unilaterally implanted at different ages with a cochlear implant. Implanted animals were chronically stimulated using a single-channel portable signal processor for two to five months. Microelectrode recordings were performed at the primary auditory cortex under stimulation at the hearing and deaf ear with bilateral cochlear implants. Local field potentials (LFPs) were compared at the cortex ipsilateral and contralateral to the hearing ear. The focus of the study was on the morphology and the onset latency of the LFPs. With respect to morphology of LFPs, pronounced hemisphere-specific effects were observed. Morphology of amplitude-normalized LFPs for stimulation of the deaf and the hearing ear was similar for responses recorded at the same hemisphere. However, when comparisons were performed between the hemispheres, the morphology was more dissimilar even though the same ear was stimulated. This demonstrates hemispheric specificity of some cortical adaptations irrespective of the ear stimulated. The results suggest a specific adaptation process at the hemisphere ipsilateral to the hearing ear, involving specific (down-regulated inhibitory) mechanisms not found in the contralateral hemisphere. Finally, onset latencies revealed that the sensitive period for the cortex ipsilateral to the hearing ear is shorter than that for the contralateral cortex. Unilateral hearing experience leads to a functionally-asymmetric brain with different neuronal reorganizations and different sensitive periods involved. PMID:24348345
Speech-evoked auditory brainstem responses in children with hearing loss.
Koravand, Amineh; Al Osman, Rida; Rivest, Véronique; Poulin, Catherine
2017-08-01
The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. Auditory Brainstem Responses (ABRs) were recorded using click and speech /da/ stimuli. Twenty-five children, aged 6-14 years, participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. No significant differences were observed for the click-evoked ABRs between the normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant differences were found between the two groups for the latencies of the following responses: onset (V and A), transition (C), one of the steady-state waves (F), and offset (O). However, the latency of the steady-state waves (D and E) was significantly longer for the hearing-impaired group compared to the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger for the hearing-impaired group compared to the normal hearing group. Results obtained from the speech-evoked ABRs suggest that children with a mild to moderately-severe sensorineural hearing loss have a specific pattern of subcortical auditory processing. Our results show differences in the speech-evoked ABRs of normal hearing children compared to hearing-impaired children. These results add to the body of literature on how children with hearing loss process speech at the brainstem level. Copyright © 2017 Elsevier B.V. All rights reserved.
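Latency measures like those reported above (waves V, A, C, D, E, F, O) ultimately come down to picking peaks on the averaged response. A sketch of such peak detection on a synthetic averaged waveform; all signal parameters here are invented for illustration:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 10000                          # sampling rate in Hz (assumed)
t = np.arange(500) / fs * 1000.0    # 50 ms time axis, in ms

# Synthetic "averaged response": two Gaussian bumps standing in for peaks.
waveform = (np.exp(-((t - 7.0) ** 2) / 0.5)
            + 0.6 * np.exp(-((t - 20.0) ** 2) / 2.0))

# Keep only peaks above an amplitude criterion and read off their latencies.
peaks, _ = find_peaks(waveform, height=0.3)
latencies_ms = t[peaks]
print(latencies_ms)
```

In practice, clinical peak picking also involves visual confirmation by an examiner; this sketch shows only the automatic first pass.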
Auditory Evoked Potentials with Different Speech Stimuli: a Comparison and Standardization of Values
Didoné, Dayane Domeneghini; Oppitz, Sheila Jacques; Folgearini, Jordana; Biaggio, Eliara Pinto Vieira; Garcia, Michele Vargas
2016-01-01
Introduction: Long Latency Auditory Evoked Potentials (LLAEP) with speech sounds have been the subject of research, as these stimuli are well suited to assessing individuals' detection and discrimination. Objective: The objective of this study is to compare and describe the latency and amplitude values of cortical potentials for speech stimuli in adults with normal hearing. Methods: The sample comprised 30 normal hearing individuals aged between 18 and 32 years, without otological disease or auditory processing disorders. All participants underwent LLAEP testing using pairs of speech stimuli (/ba/ x /ga/, /ba/ x /da/, and /ba/ x /di/). The LLAEP were obtained with binaural stimulation at an intensity of 75 dB SPL. In total, 300 stimuli were used (∼60 rare and 240 frequent). Participants were instructed to count the rare stimuli. The authors analyzed the latencies of the P1, N1, P2, N2, and P300 components, as well as the amplitude of the P300. Results: The mean age of the group was approximately 23 years. The mean values of the cortical potentials varied with the different speech stimuli: N2 latency was greatest for /ba/ x /di/ and P300 latency was greatest for /ba/ x /ga/. The overall average amplitude ranged from 5.35 to 7.35 µV across the different speech stimuli. Conclusion: It was possible to obtain latency and amplitude values for the different speech stimuli. The N2 component showed the longest latency with the /ba/ x /di/ stimulus, and the P300 with /ba/ x /ga/. PMID:27096012
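The oddball paradigm described above interleaves rare targets among frequent standards (here ≈60 rare among 300 total). A sketch of how such a sequence might be assembled, assuming the common constraint that deviants never occur back-to-back (that constraint is our assumption, not stated in the abstract):

```python
import random

def oddball_sequence(n_frequent=240, n_rare=60, seed=1):
    """Randomized oddball sequence in which rare deviants never occur
    back-to-back: each deviant is dropped into a distinct gap between
    (or around) the frequent standards."""
    rng = random.Random(seed)
    # n_frequent standards create n_frequent + 1 gaps, including both ends.
    gaps = set(rng.sample(range(n_frequent + 1), n_rare))
    seq = []
    for i in range(n_frequent + 1):
        if i in gaps:
            seq.append("rare")
        if i < n_frequent:
            seq.append("frequent")
    return seq

seq = oddball_sequence()
print(len(seq), seq.count("rare"))
```

Because each gap holds at most one deviant, the no-adjacent-deviants property holds by construction rather than by rejection sampling.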
Cardon, Garrett; Sharma, Anu
2013-01-01
Objective: We examined cortical auditory development and behavioral outcomes in children with ANSD fitted with cochlear implants (CIs). Design: Cortical maturation, measured by P1 cortical auditory evoked potential (CAEP) latency, was regressed against scores on the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS). Implantation age was also considered in relation to CAEP findings. Study Sample: Cross-sectional and longitudinal samples of 24 and 11 children, respectively, with ANSD fitted with CIs. Results: P1 CAEP responses were present in all children after implantation, though previous findings suggest that only 50-75% of ANSD children with hearing aids show CAEP responses. P1 CAEP latency was significantly correlated with participants' IT-MAIS scores. Furthermore, more children implanted before age two years showed normal P1 latencies, while those implanted later mainly showed delayed latencies. Longitudinal analysis revealed that most children showed normal or improved cortical maturation after implantation. Conclusion: Cochlear implantation resulted in measurable cortical auditory development for all children with ANSD. Children fitted with CIs under age two years were more likely to show age-appropriate CAEP responses within 6 months after implantation, suggesting a possible sensitive period for cortical auditory development in ANSD. That CAEP responses were correlated with behavioral outcome highlights their utility in clinical decision-making. PMID:23819618
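The analysis reported above (P1 CAEP latency regressed against IT-MAIS scores) is an ordinary regression between one latency and one behavioral score per child. A sketch with entirely hypothetical values, chosen only to illustrate a negative latency-outcome association; the study's actual data are not reproduced here:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical P1 latencies (ms) and IT-MAIS scores (0-40), one pair per child.
p1_latency_ms = np.array([110.0, 125.0, 140.0, 155.0, 170.0, 185.0, 200.0, 230.0])
itmais_score  = np.array([ 38.0,  36.0,  33.0,  30.0,  26.0,  22.0,  18.0,  10.0])

# Simple linear regression: longer latency (less mature cortex) predicts
# poorer behavioral outcome in this illustrative data set.
fit = linregress(p1_latency_ms, itmais_score)
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}, slope = {fit.slope:.3f} points/ms")
```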
Stropahl, Maren; Chen, Ling-Chia; Debener, Stefan
2017-01-01
With the advances of cochlear implant (CI) technology, many deaf individuals can partially regain their hearing ability. However, there is a large variation in the level of recovery. Cortical changes induced by hearing deprivation and restoration with CIs have been thought to contribute to this variation. The current review aims to identify these cortical changes in postlingually deaf CI users and discusses their maladaptive or adaptive relationship to the CI outcome. Overall, intra-modal and cross-modal reorganization patterns have been identified in postlingually deaf CI users in visual and in auditory cortex. Even though cross-modal activation in auditory cortex is considered as maladaptive for speech recovery in CI users, a similar activation relates positively to lip reading skills. Furthermore, cross-modal activation of the visual cortex seems to be adaptive for speech recognition. Currently available evidence points to an involvement of further brain areas and suggests that a focus on the reversal of visual take-over of the auditory cortex may be too limited. Future investigations should consider expanded cortical as well as multi-sensory processing and capture different hierarchical processing steps. Furthermore, prospective longitudinal designs are needed to track the dynamics of cortical plasticity that takes place before and after implantation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Effects of aging and sensory loss on glial cells in mouse visual and auditory cortices.
Tremblay, Marie-Ève; Zettel, Martha L; Ison, James R; Allen, Paul D; Majewska, Ania K
2012-04-01
Normal aging is often accompanied by a progressive loss of receptor sensitivity in hearing and vision, whose consequences on cellular function in cortical sensory areas have remained largely unknown. By examining the primary auditory (A1) and visual (V1) cortices in two inbred strains of mice undergoing either age-related loss of audition (C57BL/6J) or vision (CBA/CaJ), we were able to describe cellular and subcellular changes that were associated with normal aging (occurring in A1 and V1 of both strains) or specifically with age-related sensory loss (only in A1 of C57BL/6J or V1 of CBA/CaJ), using immunocytochemical electron microscopy and light microscopy. While the changes were subtle in neurons, glial cells and especially microglia were transformed in aged animals. Microglia became more numerous and irregularly distributed, displayed more variable cell body and process morphologies, occupied smaller territories, and accumulated phagocytic inclusions that often displayed ultrastructural features of synaptic elements. Additionally, evidence of myelination defects was observed, and aged oligodendrocytes became more numerous and were more often encountered in contiguous pairs. Most of these effects were profoundly exacerbated by age-related sensory loss. Together, our results suggest that the age-related alteration of glial cells in sensory cortical areas can be accelerated by activity-driven central mechanisms that result from an age-related loss of peripheral sensitivity. In light of our observations, these age-related changes in sensory function should be considered when investigating cellular, cortical, and behavioral functions throughout the lifespan in these commonly used C57BL/6J and CBA/CaJ mouse models. Copyright © 2012 Wiley Periodicals, Inc.
Huber, Rainer; Bisitz, Thomas; Gerkmann, Timo; Kiessling, Jürgen; Meister, Hartmut; Kollmeier, Birger
2018-06-01
The perceived quality of nine different single-microphone noise reduction (SMNR) algorithms was evaluated and compared in subjective listening tests with normal hearing and hearing-impaired (HI) listeners. Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortion, the intrusiveness of background noise, listening effort, and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method. Eighteen normal hearing and 18 moderately HI subjects participated in the study. Significant differences between the rating behaviours of the two subject groups were observed: while normal hearing subjects clearly differentiated between different SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated the speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal hearing subjects. It seems harder for HI listeners to distinguish between additive noise and speech distortion, and/or they may have a different understanding of the term "speech distortion" than normal hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.
Cortical Reorganisation during a 30-Week Tinnitus Treatment Program
McMahon, Catherine M.; Ibrahim, Ronny K.; Mathur, Ankit
2016-01-01
Subjective tinnitus is characterised by the conscious perception of a phantom sound. Previous studies have shown that individuals with chronic tinnitus have disrupted sound-evoked cortical tonotopic maps, time-shifted evoked auditory responses, and altered oscillatory cortical activity. The main objectives of this study were to: (i) compare sound-evoked brain responses and cortical tonotopic maps in individuals with bilateral tinnitus and those without tinnitus; and (ii) investigate whether changes in these sound-evoked responses occur with amelioration of the tinnitus percept during a 30-week tinnitus treatment program. Magnetoencephalography (MEG) recordings of 12 bilateral tinnitus participants and 10 control normal-hearing subjects reporting no tinnitus were recorded at baseline, using 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz tones presented monaurally at 70 dB SPL through insert tube phones. For the tinnitus participants, MEG recordings were obtained at 5-, 10-, 20- and 30-week time points during tinnitus treatment. Results for the 500 Hz and 1000 Hz sources (where hearing thresholds were within normal limits for all participants) showed that the tinnitus participants had significantly larger and more anteriorly located source strengths when compared to the non-tinnitus participants. During the 30-week tinnitus treatment, the participants' 500 Hz and 1000 Hz source strengths remained higher than those of the non-tinnitus participants; however, the source locations shifted towards those recorded from the non-tinnitus control group. Further, in the left hemisphere, there was a time-shifted association between the trajectory of change in the individual's objective measures (source strength and anterior-posterior source location) and subjective measures (the tinnitus reaction questionnaire, TRQ).
The differences in source strength between the two groups suggest that individuals with tinnitus have enhanced central gain which is not significantly influenced by the tinnitus treatment and may result from the hearing loss per se. On the other hand, the shifts in the tonotopic map towards the non-tinnitus participants' source locations suggest that the tinnitus treatment might reduce the disruptions in the map, presumably produced directly or indirectly by the tinnitus percept. Further, the similarity in the trajectory of change across the objective and subjective parameters after time-shifting the perceptual changes by 5 weeks suggests that, during or following treatment, perceptual changes in the tinnitus percept may precede neurophysiological changes. Subgroup analyses conducted by magnitude of hearing loss suggest that there were no differences in the 500 Hz and 1000 Hz source strength amplitudes between the mild-moderate and the mild-severe hearing loss subgroups, although the mean source strength was consistently higher for the mild-severe subgroup. Further, the mild-severe subgroup had 500 Hz and 1000 Hz source locations located more anteriorly (i.e., more disrupted relative to the control group) than the mild-moderate subgroup, although this trended towards significance only for the 500 Hz left hemisphere source. While the small number of participants within the subgroup analyses reduces the statistical power, this study suggests that those with greater magnitudes of hearing loss show greater cortical disruptions with tinnitus, and that tinnitus treatment appears to reduce the tonotopic map disruptions but not the source strength (or central gain). PMID:26901425
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Bauer, Julia; Widmer, Susann; Meyer, Martin
2018-01-01
Cognitive abilities such as attention or working memory can support older adults during speech perception. However, cognitive abilities as well as speech perception decline with age, leading to the expenditure of effort during speech processing. This longitudinal study therefore investigated age-related differences in electrophysiological processes during speech discrimination and assessed the extent to which such cognitive auditory processes can be enhanced through repeated auditory exposure. For that purpose, accuracy and reaction time were compared between 13 older adults (62-76 years) and 15 middle-aged (28-52 years) controls in an active oddball paradigm which was administered at three consecutive measurement time points at an interval of 2 weeks, while EEG was recorded. As a standard stimulus, the nonsense syllable /'a:ʃa/ was used, while the nonsense syllable /'a:sa/ and a morphing between /'a:ʃa/ and /'a:sa/ served as deviants. N2b and P3b ERP responses were evaluated as a function of age, deviant, and measurement time point using a data-driven topographical microstate analysis. From middle age to old age, age-related decline in attentive perception (as reflected in the N2b-related microstates) and in memory updating and attentional processes (as reflected in the P3b-related microstates) was found, as indicated by lower neural responses, later onsets of the respective cortical networks, and age-related changes in frontal activation during attentional stimulus processing. Importantly, N2b- and P3b-related microstates changed as a function of repeated stimulus exposure in both groups. This research therefore suggests that experience with auditory stimuli can support auditory neurocognitive processes in normal hearing adults into advanced age. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Harris, Richard W.; And Others
1988-01-01
A two-microphone adaptive digital noise cancellation technique improved word-recognition ability for 20 normal and 12 hearing-impaired adults by reducing multitalker speech babble and speech spectrum noise 18-22 dB. Word recognition improvements averaged 37-50 percent for normal and 27-40 percent for hearing-impaired subjects. Improvement was best…
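The two-microphone adaptive noise cancellation described above is classically built around an LMS (least-mean-squares) filter that learns the acoustic path from the noise reference to the primary microphone; the error signal then approximates the clean speech. A minimal numpy sketch under that assumption — the tap count and step size are illustrative, and the original study's filter design is not specified here:

```python
import numpy as np

def lms_cancel(primary, reference, taps=32, mu=0.005):
    """Two-microphone adaptive noise cancellation via the LMS algorithm.

    primary:   speech + noise picked up by the main microphone
    reference: noise-correlated signal from the second microphone
    Returns the error signal, i.e. the enhanced speech estimate.
    """
    w = np.zeros(taps)                           # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # newest reference sample first
        y = w @ x                                # estimated noise at primary mic
        e = primary[n] - y                       # residual = enhanced speech
        w += 2 * mu * e * x                      # LMS weight update
        out[n] = e
    return out
```

With a stationary noise path the residual noise power drops well below that of the unprocessed primary signal; the 18-22 dB reductions reported above would additionally depend on microphone placement and noise stationarity.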
Residual Inhibition Functions Overlap Tinnitus Spectra and the Region of Auditory Threshold Shift
Moffat, Graeme; Baumann, Michael; Ward, Lawrence M.
2008-01-01
Animals exposed to noise trauma show augmented synchronous neural activity in tonotopically reorganized primary auditory cortex consequent on hearing loss. Diminished intracortical inhibition in the reorganized region appears to enable synchronous network activity that develops when deafferented neurons begin to respond to input via their lateral connections. In humans with tinnitus accompanied by hearing loss, this process may generate a phantom sound that is perceived in accordance with the location of the affected neurons in the cortical place map. The neural synchrony hypothesis predicts that tinnitus spectra, and heretofore unmeasured “residual inhibition functions” that relate residual tinnitus suppression to the center frequency of masking sounds, should cover the region of hearing loss in the audiogram. We confirmed these predictions in two independent cohorts totaling 90 tinnitus subjects, using computer-based tools designed to assess the psychoacoustic properties of tinnitus. Tinnitus spectra and residual inhibition functions for depth and duration increased with the amount of threshold shift over the region of hearing impairment. Residual inhibition depth was shallower when the masking sounds that were used to induce residual inhibition showed decreased correspondence with the frequency spectrum and bandwidth of the tinnitus. These findings suggest that tinnitus and its suppression in residual inhibition depend on processes that span the region of hearing impairment and not on mechanisms that enhance cortical representations for sound frequencies at the audiometric edge. Hearing thresholds measured in age-matched control subjects without tinnitus implicated hearing loss as a factor in tinnitus, although elevated thresholds alone were not sufficient to cause tinnitus. PMID:18712566
Central Auditory Development: Evidence from CAEP Measurements in Children Fit with Cochlear Implants
ERIC Educational Resources Information Center
Dorman, Michael F.; Sharma, Anu; Gilley, Phillip; Martin, Kathryn; Roland, Peter
2007-01-01
In normal-hearing children the latency of the P1 component of the cortical evoked response to sound varies as a function of age and, thus, can be used as a biomarker for maturation of central auditory pathways. We assessed P1 latency in 245 congenitally deaf children fit with cochlear implants following various periods of auditory deprivation. If…
Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile
2018-02-19
The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users.
Nonlinear frequency compression: effects on sound quality ratings of speech and music.
Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas
2013-03-01
Frequency lowering technologies offer an alternative amplification solution for severe-to-profound high-frequency hearing losses. While frequency lowering technologies may improve the audibility of high-frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on speech quality was measured subjectively with 12 normal-hearing adults, 12 normal-hearing children, 13 hearing-impaired adults, and 9 hearing-impaired children. In the second study, 12 normal-hearing and 8 hearing-impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing-impaired adults were more tolerant of increased frequency compression than normal-hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing-impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing-impaired individuals where sound quality is not adversely affected. These results may assist an audiologist in clinical NFC hearing aid fittings in achieving a balance between high-frequency audibility and sound quality.
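The two NFC parameters varied above — cutoff frequency and compression ratio — define a frequency-input/frequency-output mapping. A sketch of one commonly described formulation, in which frequencies above the cutoff are compressed on a logarithmic scale (an illustrative textbook form; commercial implementations differ in detail):

```python
import numpy as np

def nfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Nonlinear frequency compression mapping (illustrative formulation).

    Frequencies at or below `cutoff` pass through unchanged; frequencies
    above it are compressed on a log scale by `ratio`, pulling high-frequency
    energy down into the audible region.
    """
    f = np.asarray(f_in, dtype=float)
    return np.where(f <= cutoff, f, cutoff * (f / cutoff) ** (1.0 / ratio))
```

With a 2 kHz cutoff and a 2:1 ratio, an 8 kHz input maps to 4 kHz; lowering the cutoff (the parameter found above to matter most for sound quality) compresses a wider span of the speech spectrum.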
Martin, B A; Sigal, A; Kurtzberg, D; Stapells, D R
1997-03-01
This study investigated the effects of decreased audibility produced by high-pass noise masking on cortical event-related potentials (ERPs) N1, N2, and P3 to the speech sounds /ba/ and /da/ presented at 65 and 80 dB SPL. Normal-hearing subjects pressed a button in response to the deviant sound in an oddball paradigm. Broadband masking noise was presented at an intensity sufficient to completely mask the response to the 65-dB SPL speech sounds, and subsequently high-pass filtered at 4000, 2000, 1000, 500, and 250 Hz. With high-pass masking noise, pure-tone behavioral thresholds increased by an average of 38 dB at the high-pass cutoff and by 50 dB one octave above the cutoff frequency. Results show that as the cutoff frequency of the high-pass masker was lowered, ERP latencies to speech sounds increased and amplitudes decreased. The cutoff frequency where these changes first occurred and the rate of the change differed for N1 compared to N2, P3, and the behavioral measures. N1 showed gradual changes as the masker cutoff frequency was lowered. N2, P3, and behavioral measures showed marked changes below a masker cutoff of 2000 Hz. These results indicate that the decreased audibility resulting from the noise masking affects the various ERP components in a differential manner. N1 is related to the presence of audible stimulus energy, being present whether audible stimuli are discriminable or not. In contrast, N2 and P3 were absent when the stimuli were audible but not discriminable (i.e., when the second formant transitions were masked), reflecting stimulus discrimination. These data have implications regarding the effects of decreased audibility on cortical processing of speech sounds and for the study of cortical ERPs in populations with hearing impairment.
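The masker manipulation above — broadband noise whose energy below a movable cutoff is removed — can be sketched with a brick-wall FFT filter. This is an idealized stand-in; the study's actual filtering and level calibration are not reproduced here:

```python
import numpy as np

def highpass_masker(fs, dur_s, cutoff_hz, seed=0):
    """Generate broadband noise high-pass filtered at `cutoff_hz`.

    Lowering `cutoff_hz` masks progressively more of the speech spectrum,
    mirroring the 4000 -> 250 Hz cutoff series described above.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur_s)
    noise = rng.standard_normal(n)
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectrum[freqs < cutoff_hz] = 0.0   # brick-wall high-pass
    return np.fft.irfft(spectrum, n=n)
```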
Behavioral training promotes multiple adaptive processes following acute hearing loss.
Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J
2016-03-23
The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.
Mateer, C A; Rapport, R L; Kettrick, C
1984-01-01
A normally hearing left-handed patient familiar with American Sign Language (ASL) was assessed under sodium amytal conditions and with left cortical stimulation in both oral speech and signed English. Lateralization was mixed but complementary in each language mode: the right hemisphere perfusion severely disrupted motoric aspects of both types of language expression, the left hemisphere perfusion specifically disrupted features of grammatical and semantic usage in each mode of expression. Both semantic and syntactic aspects of oral and signed responses were altered during left posterior temporal-parietal stimulation. Findings are discussed in terms of the neurological organization of ASL and linguistic organization in cases of early left hemisphere damage.
ERIC Educational Resources Information Center
Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker
2016-01-01
Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…
Hearing aids: indications, technology, adaptation, and quality control
Hoppe, Ulrich; Hesse, Gerhard
2017-01-01
Hearing loss can be caused by a number of different pathological conditions. Some of them can be treated successfully, mainly by surgery, depending on the individual's disease process. However, the treatment of chronic sensorineural hearing loss with damaged cochlear structures usually requires hearing rehabilitation by means of technical amplification. During the last two decades, tremendous improvements in hearing aid technology have led to higher quality in the hearing rehabilitation process. For example, sophisticated signal processing has reduced acoustic feedback, and hence open fitting options are available even for subjects with higher degrees of hearing loss. In particular, open fitting is an option for high-frequency hearing loss. Both users' acceptance and perceived sound quality were significantly increased by open fittings. However, many hearing-impaired subjects remain reluctant to accept acoustic amplification. Since ENT specialists play a key role in hearing aid provision, they should promote early hearing aid rehabilitation and include it in counselling, even for subjects with mild and moderate hearing loss. Recent investigations have demonstrated the benefit of early hearing aid use in this group of patients, since it may help to reduce subsequent damage such as auditory deprivation, social isolation, development of dementia, and cognitive decline. For subjects with tinnitus, hearing aids may also support masking by environmental sounds and enhance cortical inhibition. The present paper describes the latest developments in hearing aid technology and the current state of the art for amplification modalities. Implications for both hearing aid indication and provision are discussed. PMID:29279726
NASA Astrophysics Data System (ADS)
Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.
2014-08-01
Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. 
Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather than the caudal-medial ICC using the AMI may improve or elicit different types of hearing capabilities.
Cortical Auditory Evoked Potentials Recorded From Nucleus Hybrid Cochlear Implant Users.
Brown, Carolyn J; Jeon, Eun Kyung; Chiou, Li-Kuei; Kirby, Benjamin; Karsten, Sue A; Turner, Christopher W; Abbas, Paul J
2015-01-01
Nucleus Hybrid Cochlear Implant (CI) users hear low-frequency sounds via acoustic stimulation and high-frequency sounds via electrical stimulation. This within-subject study compares three different methods of coordinating programming of the acoustic and electrical components of the Hybrid device. Speech perception and cortical auditory evoked potentials (CAEP) were used to assess differences in outcome. The goals of this study were to determine whether (1) the evoked potential measures could predict which programming strategy resulted in better outcome on the speech perception task or was preferred by the listener, and (2) CAEPs could be used to predict which subjects benefitted most from having access to the electrical signal provided by the Hybrid implant. CAEPs were recorded from 10 Nucleus Hybrid CI users. Study participants were tested using three different experimental processor programs (MAPs) that differed in terms of how much overlap there was between the range of frequencies processed by the acoustic component of the Hybrid device and range of frequencies processed by the electrical component. The study design included allowing participants to acclimatize for a period of up to 4 weeks with each experimental program prior to speech perception and evoked potential testing. Performance using the experimental MAPs was assessed using both a closed-set consonant recognition task and an adaptive test that measured the signal-to-noise ratio that resulted in 50% correct identification of a set of 12 spondees presented in background noise. Long-duration, synthetic vowels were used to record both the cortical P1-N1-P2 "onset" response and the auditory "change" response (also known as the auditory change complex [ACC]). Correlations between the evoked potential measures and performance on the speech perception tasks are reported. Differences in performance using the three programming strategies were not large. 
Peak-to-peak amplitude of the ACC was not found to be sensitive enough to accurately predict the programming strategy that resulted in the best performance on either measure of speech perception. All 10 Hybrid CI users had residual low-frequency acoustic hearing. For all 10 subjects, allowing them to use both the acoustic and electrical signals provided by the implant improved performance on the consonant recognition task. For most subjects, it also resulted in slightly larger cortical change responses. However, the impact that listening mode had on the cortical change responses was small, and again, the correlation between the evoked potential and speech perception results was not significant. CAEPs can be successfully measured from Hybrid CI users. The responses that are recorded are similar to those recorded from normal-hearing listeners. The goal of this study was to see if CAEPs might play a role either in identifying the experimental program that resulted in best performance on a consonant recognition task or in documenting benefit from the use of the electrical signal provided by the Hybrid CI. At least for the stimuli and specific methods used in this study, no such predictive relationship was found.
Pilot study of cognition in children with unilateral hearing loss.
Ead, Banan; Hale, Sandra; DeAlwis, Duneesha; Lieu, Judith E C
2013-11-01
The objective of this study was to obtain preliminary data on the cognitive function of children with unilateral hearing loss in order to identify, quantify, and interpret differences in cognitive and language functions between children with unilateral hearing loss and children with normal hearing. Fourteen children aged 9-14 years (7 with severe-to-profound sensorineural unilateral hearing loss and 7 sibling controls with normal hearing) were administered five tests that assessed the cognitive functions of working memory, processing speed, attention, and phonological processing. Mean composite scores for phonological processing were significantly lower for the group with unilateral hearing loss than for controls on one composite and four subtests. The unilateral hearing loss group trended toward worse performance on one additional composite and on two additional phonological processing subtests. The unilateral hearing loss group also performed worse than the control group on the complex letter span task. Analysis examining performance on the two levels of task difficulty revealed a significant main effect of task difficulty and an interaction between task difficulty and group. Cognitive function and phonological processing test results suggest two related deficits associated with unilateral hearing loss: (1) reduced accuracy and efficiency of phonological processing, and (2) impaired executive control function when maintaining verbal information in the face of processing incoming, irrelevant verbal information. These results provide a possible explanation for the educational difficulties experienced by children with unilateral hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Cortical Plasticity after Cochlear Implantation
Petersen, B.; Gjedde, A.; Wallentin, M.; Vuust, P.
2013-01-01
The most dramatic progress in the restoration of hearing takes place in the first months after cochlear implantation. To map the brain activity underlying this process, we used positron emission tomography at three time points: within 14 days, three months, and six months after switch-on. Fifteen recently implanted adult implant recipients listened to running speech or speech-like noise in four sequential PET sessions at each milestone. CI listeners with postlingual hearing loss showed differential activation of left superior temporal gyrus during speech and speech-like stimuli, unlike CI listeners with prelingual hearing loss. Furthermore, Broca's area was activated as an effect of time, but only in CI listeners with postlingual hearing loss. The study demonstrates that adaptation to the cochlear implant is highly related to the history of hearing loss. Speech processing in patients whose hearing loss occurred after the acquisition of language involves brain areas associated with speech comprehension, which is not the case for patients whose hearing loss occurred before the acquisition of language. Finally, the findings confirm the key role of Broca's area in restoration of speech perception, but only in individuals in whom Broca's area has been active prior to the loss of hearing. PMID:24377050
Cortical Development and Neuroplasticity in Auditory Neuropathy Spectrum Disorder
Sharma, Anu; Cardon, Garrett
2015-01-01
Cortical development is dependent to a large extent on stimulus-driven input. Auditory Neuropathy Spectrum Disorder (ANSD) is a recently described form of hearing impairment where neural dys-synchrony is the predominant characteristic. Children with ANSD provide a unique platform to examine the effects of asynchronous and degraded afferent stimulation on cortical auditory neuroplasticity and behavioral processing of sound. In this review, we describe patterns of auditory cortical maturation in children with ANSD. The disruption of cortical maturation that leads to these various patterns includes high levels of intra-individual cortical variability and deficits in cortical phase synchronization of oscillatory neural responses. These neurodevelopmental changes, which are constrained by sensitive periods for central auditory maturation, are correlated with behavioral outcomes for children with ANSD. Overall, we hypothesize that patterns of cortical development in children with ANSD appear to be markers of the severity of the underlying neural dys-synchrony, providing prognostic indicators of success of clinical intervention with amplification and/or electrical stimulation. PMID:26070426
Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray
2016-01-01
To use a computer model of impaired hearing to explore the effects of a physiologically inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized as a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on the hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and of the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm shifted the model's impaired hearing profile towards a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.
Hearing after congenital deafness: central auditory plasticity and sensory deprivation.
Kral, A; Hartmann, R; Tillein, J; Heid, S; Klinke, R
2002-08-01
The congenitally deaf cat suffers from a degeneration of the inner ear. The organ of Corti bears no hair cells, yet the auditory afferents are preserved. Since these animals have no auditory experience, they were used as a model for congenital deafness. Kittens were equipped with a cochlear implant at different ages and electro-stimulated over a period of 2.0-5.5 months using a monopolar single-channel compressed analogue stimulation strategy (VIENNA-type signal processor). Following a period of auditory experience, we investigated cortical field potentials in response to electrical biphasic pulses applied by means of the cochlear implant. In comparison to naive unstimulated deaf cats and normal hearing cats, the chronically stimulated animals showed larger cortical regions producing middle-latency responses at or above 300 microV amplitude at the contralateral as well as the ipsilateral auditory cortex. The cortex ipsilateral to the chronically stimulated ear did not show any signs of reduced responsiveness when stimulating the 'untrained' ear through a second cochlear implant inserted in the final experiment. With comparable duration of auditory training, the activated cortical area was substantially smaller if implantation had been performed at an older age of 5-6 months. The data emphasize that young sensory systems in cats have a higher capacity for plasticity than older ones and that there is a sensitive period for the cat's auditory system.
New Perspectives on Assessing Amplification Effects
Souza, Pamela E.; Tremblay, Kelly L.
2006-01-01
Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734
Dietz, Mathias; Hohmann, Volker; Jürgens, Tim
2015-01-01
For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types. PMID:26721918
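The cochlear implant simulation described above relies on a noise vocoder. A minimal sketch of that kind of processing is shown below; the band count, filter order, and frequency range are illustrative choices, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, seed=0):
    """Minimal noise vocoder: split the signal into log-spaced bands,
    extract each band's Hilbert envelope, and use it to modulate
    band-limited noise carriers, discarding fine structure."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(seed)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))            # slowly varying envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                   # envelope-modulated noise band
    return out
```

Low-frequency residual hearing, as simulated in the study, would then correspond to low-pass filtering the unprocessed signal and mixing it with (or substituting it for) the vocoded output.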
Seymour, Jenessa L; Low, Kathy A; Maclin, Edward L; Chiarelli, Antonio M; Mathewson, Kyle E; Fabiani, Monica; Gratton, Gabriele; Dye, Matthew W G
2017-01-01
Theories of brain plasticity propose that, in the absence of input from the preferred sensory modality, some specialized brain areas may be recruited when processing information from other modalities, which may result in improved performance. The Useful Field of View task has previously been used to demonstrate that early deafness positively impacts peripheral visual attention. The current study sought to determine the neural changes associated with those deafness-related enhancements in visual performance. Based on previous findings, we hypothesized that recruitment of posterior portions of Brodmann area 22, a brain region most commonly associated with auditory processing, would be correlated with peripheral selective attention as measured using the Useful Field of View task. We report data from severe to profoundly deaf adults and normal-hearing controls who performed the Useful Field of View task while cortical activity was recorded using the event-related optical signal. Behavioral performance, obtained in a separate session, showed that deaf subjects had lower thresholds (i.e., better performance) on the Useful Field of View task. The event-related optical data indicated greater activity for the deaf adults than for the normal-hearing controls during the task in the posterior portion of Brodmann area 22 in the right hemisphere. Furthermore, the behavioral thresholds correlated significantly with this neural activity. This work provides further support for the hypothesis that cross-modal plasticity in deaf individuals appears in higher-order auditory cortices, whereas no similar evidence was obtained for primary auditory areas. It is also the only neuroimaging study to date that has linked deaf-related changes in the right temporal lobe to visual task performance outside of the imaging environment. The event-related optical signal is a valuable technique for studying cross-modal plasticity in deaf humans. 
The non-invasive and relatively quiet characteristics of this technique have great potential utility in research with clinical populations such as deaf children and adults who have received cochlear or auditory brainstem implants. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.
2016-01-01
Purpose: This study investigated how listeners process acoustic cues preserved during sentences interrupted by nonsimultaneous noise that was amplitude modulated by a competing talker. Method: Younger adults with normal hearing and older adults with normal or impaired hearing listened to sentences with consonants or vowels replaced with noise…
Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret
2013-01-02
Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.
Campbell, Ruth; MacSweeney, Mairéad; Woll, Bencie
2014-01-01
Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections-including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI.
Given the paucity of such relevant findings, we suggest that the best guarantee of good language outcome after CI is the establishment of a secure first language pre-implant-however that may be achieved, and whatever the success of auditory restoration.
Patel, Tirth R; Shahin, Antoine J; Bhat, Jyoti; Welling, D Bradley; Moberly, Aaron C
2016-10-01
We describe a novel use of cortical auditory evoked potentials in the preoperative workup to determine ear candidacy for cochlear implantation. A 71-year-old male was evaluated who had a long-deafened right ear, had never worn a hearing aid in that ear, and relied heavily on use of a left-sided hearing aid. Electroencephalographic testing was performed using free field auditory stimulation of each ear independently with pure tones at 1000 and 2000 Hz at approximately 10 dB above pure-tone thresholds for each frequency and for each ear. Mature cortical potentials were identified through auditory stimulation of the long-deafened ear. The patient underwent successful implantation of that ear. He experienced progressively improving aided pure-tone thresholds and binaural speech recognition benefit (AzBio score of 74%). Findings suggest that use of cortical auditory evoked potentials may serve a preoperative role in ear selection prior to cochlear implantation. © The Author(s) 2016.
Golding, Maryanne; Pearce, Wendy; Seymour, John; Cooper, Alison; Ching, Teresa; Dillon, Harvey
2007-02-01
Finding ways to evaluate the success of hearing aid fittings in young infants has increased in importance with the implementation of hearing screening programs. Cortical auditory evoked potentials (CAEPs) can be recorded in infants and provide evidence for speech detection at the cortical level. The validity of this technique as a tool for hearing aid evaluation, however, still needs to be demonstrated. The present study examined the relationship between the presence/absence of CAEPs to speech stimuli and the outcomes of a parental questionnaire in young infants who were fitted with hearing aids. The presence/absence of responses was determined by an experienced examiner as well as by a statistical measure, Hotelling's T². A statistically significant correlation between CAEPs and questionnaire scores was found using the examiner's grading (rs = 0.45) and using the statistical grading (rs = 0.41), and there was reasonably good agreement between traditional response detection methods and the statistical analysis.
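The statistical grading above uses Hotelling's T², a multivariate test of whether a set of per-epoch response features has a mean reliably different from zero. A minimal sketch follows; the choice of features (e.g., per-epoch voltages averaged in a few time bins) is an assumption, not the study's exact procedure:

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_p(epochs):
    """One-sample Hotelling's T^2 test on an (n_epochs x n_features)
    array: is the mean feature vector significantly different from zero?
    A small p-value suggests a cortical response is present."""
    x = np.asarray(epochs, dtype=float)
    n, p = x.shape
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    # Convert T^2 to an F statistic with (p, n - p) degrees of freedom
    f_stat = (n - p) / (p * (n - 1)) * t2
    return f_dist.sf(f_stat, p, n - p)
```

With noise-only epochs the p-value is roughly uniform; when a consistent response shifts the epoch means, the p-value collapses toward zero, which is what allows an objective presence/absence call.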
Relating normalization to neuronal populations across cortical areas.
Ruff, Douglas A; Alberts, Joshua J; Cohen, Marlene R
2016-09-01
Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study where it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area and more variability with neurons in other cortical areas than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. Copyright © 2016 the American Physiological Society.
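The continuum from strong normalization to additive responses described above is often summarized with a simple index comparing the response to the superimposed gratings against the sum of the responses to each grating alone. Conventions differ across studies, so the ratio below is one illustrative choice rather than this paper's exact metric:

```python
def normalization_index(r_pref, r_null, r_both):
    """Summarize how a neuron combines two superimposed stimuli from its
    firing rates: the response to the preferred stimulus alone, the
    nonpreferred (null) stimulus alone, and both together.  A value near
    0.5 means the combined response is the average of the individual
    responses (strong, averaging-like normalization); a value near 1.0
    means responses add linearly (no normalization)."""
    return r_both / (r_pref + r_null)

# A unit that averages (strong normalization) vs. one that sums (none):
assert normalization_index(40.0, 10.0, 25.0) == 0.5   # averaging
assert normalization_index(40.0, 10.0, 50.0) == 1.0   # additive
```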
Brainstem timing: implications for cortical processing and literacy.
Banai, Karen; Nicol, Trent; Zecker, Steven G; Kraus, Nina
2005-10-26
The search for a unique biological marker of language-based learning disabilities has so far yielded inconclusive findings. Previous studies have shown a plethora of auditory processing deficits in learning disabilities at both the perceptual and physiological levels. In this study, we investigated the association among brainstem timing, cortical processing of stimulus differences, and literacy skills. To that end, brainstem timing and cortical sensitivity to acoustic change [mismatch negativity (MMN)] were measured in a group of children with learning disabilities and normal-learning children. The learning-disabled (LD) group was further divided into two subgroups with normal and abnormal brainstem timing. MMNs, literacy, and cognitive abilities were compared among the three groups. LD individuals with abnormal brainstem timing were more likely to show reduced processing of acoustic change at the cortical level compared with both normal-learning individuals and LD individuals with normal brainstem timing. This group was also characterized by a more severe form of learning disability manifested by poorer reading, listening comprehension, and general cognitive ability. We conclude that abnormal brainstem timing in learning disabilities is related to higher incidence of reduced cortical sensitivity to acoustic change and to deficient literacy skills. These findings suggest that abnormal brainstem timing may serve as a reliable marker of a subgroup of individuals with learning disabilities. They also suggest that faulty mechanisms of neural timing at the brainstem may be the biological basis of malfunction in this group.
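The mismatch negativity used above is conventionally computed as a deviant-minus-standard difference wave, with the MMN taken as the largest negativity in an early post-stimulus window. A minimal sketch, assuming a typical 100-250 ms search window (the exact window varies by study):

```python
import numpy as np

def mmn_peak(standard_erp, deviant_erp, times, win=(0.10, 0.25)):
    """Compute the deviant-minus-standard difference wave and return the
    most negative deflection (amplitude, latency in s) inside the search
    window.  The 100-250 ms window is a common choice, not a standard."""
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    times = np.asarray(times)
    mask = (times >= win[0]) & (times <= win[1])
    i = np.argmin(diff[mask])                  # MMN is a negativity
    return diff[mask][i], times[mask][i]
```

Reduced cortical sensitivity to acoustic change, as reported for the abnormal-brainstem-timing subgroup, would show up here as a smaller (less negative) MMN amplitude.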
Plyler, Erin; Harkrider, Ashley W
2013-01-01
A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr of age, his vocabulary had diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists, who often are on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists are often among the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance for early and accurate referral, diagnosis, and monitoring of a child with LKS, which is imperative for a positive prognosis. American Academy of Audiology.
"I know you can hear me": neural correlates of feigned hearing loss.
McPherson, Bradley; McMahon, Katie; Wilson, Wayne; Copland, David
2012-08-01
In the assessment of human hearing, it is often important to determine whether hearing loss is organic or nonorganic in nature. Nonorganic, or functional, hearing loss is often associated with deceptive intention on the part of the listener. Over the past decade, functional neuroimaging has been used to study the neural correlates of deception, and studies have consistently highlighted the contribution of the prefrontal cortex in such behaviors. Can patterns of brain activity be similarly used to detect when an individual is feigning a hearing loss? To answer this question, 15 adult participants were requested to respond to pure tones and simple words correctly, incorrectly, randomly, or with the intent to feign a hearing loss. As predicted, more activity was observed in the prefrontal cortices (as measured by functional magnetic resonance imaging), and delayed behavioral reaction times were noted, when the participants feigned a hearing loss or responded randomly versus when they responded correctly or incorrectly. The results suggest that cortical imaging techniques could play a role in identifying individuals who are feigning hearing loss. Copyright © 2011 Wiley Periodicals, Inc.
Werfel, Krystal L
2017-10-05
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.
Role of Visual Speech in Phonological Processing by Children with Hearing Loss
ERIC Educational Resources Information Center
Jerger, Susan; Tye-Murray, Nancy; Abdi, Herve
2009-01-01
Purpose: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). Method: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in…
Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.
2011-01-01
Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. 
There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants. Conclusions This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones. PMID:21412155
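The reported performance-intensity slope (a 1 dB change in signal-to-noise ratio corresponding to roughly a 10% change in intelligibility) and the 1.3 dB measurement error give a simple rule for interpreting HINT threshold differences. A sketch using those reported values:

```python
def srt_benefit(srt_ref_db, srt_test_db, slope_pct_per_db=10.0,
                measurement_error_db=1.3):
    """Interpret a change in HINT speech reception threshold (SRT).
    Using the reported performance-intensity slope (~10% intelligibility
    per dB) and measurement error (~1.3 dB), return the estimated
    intelligibility change in percentage points and whether the SRT
    difference exceeds the test's measurement error."""
    delta_db = srt_ref_db - srt_test_db   # positive = test condition better
    gain_pct = delta_db * slope_pct_per_db
    return gain_pct, abs(delta_db) > measurement_error_db

# e.g., a 2 dB lower SRT corresponds to about a 20% intelligibility gain,
# which is beyond the 1.3 dB measurement error:
gain, reliable = srt_benefit(-2.0, -4.0)
assert gain == 20.0 and reliable
```

This is why the study can treat HRTF-to-HRTF threshold differences larger than about 1.3 dB as meaningful rather than test-retest noise.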
Neuro-rehabilitation Approach for Sudden Sensorineural Hearing Loss
Sekiya, Kenichi; Fukushima, Munehisa; Teismann, Henning; Lagemann, Lothar; Kakigi, Ryusuke; Pantev, Christo; Okamoto, Hidehiko
2016-01-01
Sudden sensorineural hearing loss (SSHL) is characterized by acute, idiopathic hearing loss. The estimated incidence rate is 5-30 cases per 100,000 people per year. The causes of SSHL and the mechanisms underlying it currently remain unknown. Based on several hypotheses such as a circulatory disturbance to the cochlea, viral infection, and autoimmune disease, pharmaco-therapeutic approaches have been applied to treat SSHL patients; however, the efficacy of the standard treatment, corticosteroid therapy, is still under debate. Exposure to intense sounds has been shown to cause permanent damage to the auditory system; however, exposure to a moderate-level enriched acoustic environment after noise trauma may reduce hearing impairments. Several neuroimaging studies recently suggested that the onset of SSHL induced maladaptive cortical reorganization in the human auditory cortex, and that the degree of cortical reorganization in the acute SSHL phase negatively correlated with the recovery rate from hearing loss. This article reports the development of a novel neuro-rehabilitation approach for SSHL, "constraint-induced sound therapy (CIST)". The aim of the CIST protocol is to prevent or reduce maladaptive cortical reorganization by using an enriched acoustic environment. The canal of the intact ear of SSHL patients is plugged in order to motivate them to actively use the affected ear and thereby prevent progression of maladaptive cortical reorganization. The affected ear is also exposed to music via a headphone for 6 hr per day during hospitalization. The CIST protocol appears to be a safe, easy, inexpensive, and effective treatment for SSHL. PMID:26863274
Effect of conductive hearing loss on central auditory function.
Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher
It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly lower for the control group than for the CHL group for both ears (right: p=0.004; left: p<0.001). Individuals with CHL had significantly lower percentages of correct responses than individuals with normal hearing for both ears (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
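GIN-style scoring of the gap perception threshold can be sketched as follows. The 4-of-6 criterion is the rule commonly reported for the GIN test, but treat this as an illustration rather than the clinical scoring manual:

```python
def gin_threshold(hits, criterion=4):
    """Approximate gap-detection threshold from GIN-style data.  `hits`
    maps each gap duration (ms) to the number of correct identifications
    out of its presentations (conventionally 6).  The threshold is the
    shortest gap detected on at least `criterion` presentations, with the
    criterion also met at every longer gap; returns None if no gap
    qualifies."""
    threshold = None
    for gap in reversed(sorted(hits)):         # longest to shortest
        if hits[gap] >= criterion:
            threshold = gap
        else:
            break                              # criterion lost; stop
    return threshold
```

For example, a listener correct 1, 2, 3, 4, 5, 6, and 6 times out of 6 at gaps of 2, 3, 4, 5, 6, 8, and 10 ms would receive a 5 ms threshold, since 5 ms is the shortest gap at which performance stays at or above 4 of 6.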
Evaluation of Extended-Wear Hearing Aid Technology for Operational Military Use
2016-07-01
… listeners without degrading auditory situational awareness. To this point, significant progress has been made in this evaluation process. The devices … provide long-term hearing protection for listeners with normal hearing with minimal impact on auditory situational awareness and minimal annoyance due to … Test Plan: A comprehensive test plan is complete for the measurements at AFRL, which will incorporate goals 1-2 and 4-5 above using a normal …
Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen
2017-01-01
Early hearing deprivation can affect the development of auditory, language, and vision abilities. Insufficient or absent stimulation of the auditory cortex during the sensitive periods of plasticity can affect the function of hearing, language, and vision development. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision-related brain areas were compared between deaf infants and normal subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex. Increased ALFF and ReHo were observed in vision-related cortex, suggesting that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with duration of deafness in infants with CSSHL. ALFF of right BA39 was positively correlated with duration of deafness in infants with CSSHL. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function is affected by congenital severe sensorineural hearing loss before 4 years of age.
Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther
The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. 
Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the ease of language understanding model.
Twomey, Tae; Waters, Dafydd; Price, Cathy J; Evans, Samuel; MacSweeney, Mairéad
2017-09-27
To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing.
In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function. Copyright © 2017 Twomey et al.
Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve
The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). 
In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
Ozmeral, Erol J; Eddins, David A; Eddins, Ann C
2016-12-01
Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than narrow tuning to ITDs, neural channels have broad tuning to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts of rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes in the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both from measured latencies and amplitudes of the global field potentials and source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to have a reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging. Copyright © 2016 the American Physiological Society.
Auditory and tactile gap discrimination by observers with normal and impaired hearing.
Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J
2014-02-01
Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.
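The abstract above reports gap-duration discrimination thresholds (GDDTs) but does not describe the tracking procedure used to obtain them. As a hedged illustration only, the sketch below estimates such a threshold with a standard 2-down/1-up adaptive staircase run against a simulated observer; the tracking rule, step size, and the simple JND-based observer are all assumptions for the example, not details from the study.

```python
import random

def staircase_gddt(true_jnd_ms, start_delta_ms=10.0, step_ms=1.0,
                   reversals_needed=8, seed=1):
    """Estimate a gap-duration discrimination threshold with a
    2-down/1-up adaptive staircase (converges near the 70.7%-correct
    point).  The simulated observer is a placeholder: it always
    answers correctly when the gap increment exceeds its 'true' JND,
    and otherwise guesses (50%, two-interval forced choice)."""
    rng = random.Random(seed)
    delta = start_delta_ms          # current gap increment re: baseline gap
    correct_in_row = 0
    reversals = []                  # increment values at direction changes
    last_direction = None
    while len(reversals) < reversals_needed:
        correct = delta > true_jnd_ms or rng.random() < 0.5
        if correct:
            correct_in_row += 1
            if correct_in_row < 2:  # need 2 correct in a row to step down
                continue
            correct_in_row = 0
            direction = "down"
            delta = max(step_ms, delta - step_ms)
        else:
            correct_in_row = 0
            direction = "up"
            delta += step_ms
        if last_direction is not None and direction != last_direction:
            reversals.append(delta)
        last_direction = direction
    return sum(reversals) / len(reversals)  # mean of the reversal points
```

Averaging the increment at the reversal points is one common way to read a threshold off a staircase track; the converged value should sit near the simulated observer's JND.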
Punch, Simone; Van Dun, Bram; King, Alison; Carter, Lyndal; Pearce, Wendy
2016-01-01
This article presents the clinical protocol currently used within Australian Hearing for infant hearing aid evaluation using cortical auditory evoked potentials (CAEPs). CAEP testing is performed in the free field at two stimulus levels (65 dB sound pressure level [SPL], followed by 55 or 75 dB SPL) using three brief frequency-distinct speech sounds, /m/, /ɡ/, and /t/, within a standard audiological appointment of up to 90 minutes. CAEP results are used to check or guide modifications of hearing aid fittings or to confirm unaided hearing capability. A retrospective review of 83 client files evaluated whether clinical practice aligned with the clinical protocol. It showed that most children could be assessed as part of their initial fitting program when they were identified as a priority for CAEP testing. Aided CAEPs were most commonly assessed within 8 weeks of the fitting. A survey of 32 pediatric audiologists provided information about their perception of cortical testing at Australian Hearing. The results indicated that clinical CAEP testing influenced audiologists' approach to rehabilitation, was well received by parents, and left the audiologists satisfied with the technique. Three case studies were selected to illustrate how CAEP testing can be used in a clinical environment. Overall, CAEP testing has been effectively integrated into the infant fitting program. PMID:27587921
Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.
2015-01-01
This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities, whereas the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls who received identical spectral shaping suggests that this procedure may reduce wideband temporal modulation cues due to frequency-specific amplification that affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436
Felix, Richard A; Portfors, Christine V
2007-06-01
Individuals with age-related hearing loss often have difficulty understanding complex sounds such as basic speech. The C57BL/6 mouse suffers from progressive sensorineural hearing loss and thus is an effective tool for dissecting the neural mechanisms underlying changes in complex sound processing observed in humans. Neural mechanisms important for processing complex sounds include multiple tuning and combination sensitivity, and these responses are common in the inferior colliculus (IC) of normal-hearing mice. We examined neural responses in the IC of C57BL/6 mice to single tones and combinations of tones to examine the extent of spectral integration in the IC after age-related high-frequency hearing loss. Ten percent of the neurons were tuned to multiple frequency bands and an additional 10% displayed non-linear facilitation to the combination of two different tones (combination sensitivity). No combination-sensitive inhibition was observed. By comparing these findings to spectral integration properties in the IC of normal-hearing CBA/CaJ mice, we suggest that high-frequency hearing loss affects some of the neural mechanisms in the IC that underlie the processing of complex sounds. The loss of spectral integration properties in the IC during aging likely impairs the central auditory system's ability to process complex sounds such as speech.
Cortical signal-in-noise coding varies by noise type, signal-to-noise ratio, age, and hearing status
Maamor, Nashrah; Billings, Curtis J.
2017-01-01
The purpose of this study was to determine the effects of noise type, signal-to-noise ratio (SNR), age, and hearing status on cortical auditory evoked potentials (CAEPs) to speech sounds. This helps to explain the hearing-in-noise difficulties often seen in the aging and hearing-impaired population. Continuous, modulated, and babble noise types were presented at varying SNRs to 30 individuals divided into three groups according to age and hearing status. Significant main effects of noise type, SNR, and group were found. Interaction effects revealed that the SNR effect varies as a function of noise type and is most systematic for continuous noise. Effects of age and hearing loss were limited to CAEP latency and were differentially modulated by energetic and informational-like masking. It is clear that the spectrotemporal characteristics of signals and noises play an important role in determining the morphology of neural responses. Participant factors, such as age and hearing status, also play an important role in determining the brain's response to complex auditory stimuli and contribute to the ability to listen in noise. PMID:27838448
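The SNR manipulation described above can be made concrete with a small sketch: to present a signal at a target SNR, the noise is rescaled so that the RMS ratio of signal to noise matches the requested value in dB. This is a generic illustration of the arithmetic, not the study's actual stimulus-generation code, and the RMS-based SNR definition is an assumption.

```python
import math

def scale_noise_for_snr(signal, noise, target_snr_db):
    """Rescale `noise` so that the RMS-based signal-to-noise ratio of
    `signal` against the returned noise equals `target_snr_db`, where
    SNR_dB = 20 * log10(rms_signal / rms_noise)."""
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    # Solve 20*log10(rms(signal) / (gain * rms(noise))) = target_snr_db
    # for the noise gain.
    gain = rms(signal) / (rms(noise) * 10 ** (target_snr_db / 20))
    return [gain * n for n in noise]
```

For example, requesting 0 dB returns noise whose RMS equals the signal's RMS; each 6 dB increase in the target SNR roughly halves the noise amplitude.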
Ching, Teresa Y. C.; Zhang, Vicky W.; Hou, Sanna; Van Buynder, Patricia
2016-01-01
Hearing loss in children is detected soon after birth via newborn hearing screening. Procedures for early hearing assessment and hearing aid fitting are well established, but methods for evaluating the effectiveness of amplification for young children are limited. One promising approach to validating hearing aid fittings is to measure cortical auditory evoked potentials (CAEPs). This article provides first a brief overview of reports on the use of CAEPs for evaluation of hearing aids. Second, a study that measured CAEPs to evaluate nonlinear frequency compression (NLFC) in hearing aids for 27 children (between 6.1 and 16.8 years old) who have mild to severe hearing loss is reported. There was no significant difference in aided sensation level or the detection of CAEPs for /g/ between NLFC on and off conditions. The activation of NLFC was associated with a significant increase in aided sensation levels for /t/ and /s/. It also was associated with an increase in detection of CAEPs for /t/ and /s/. The findings support the use of CAEPs for checking audibility provided by hearing aids. Based on the current data, a clinical protocol for using CAEPs to validate audibility with amplification is presented. PMID:27587920
Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.
Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza
2012-02-01
Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.
Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss
ERIC Educational Resources Information Center
Koravand, Amineh; Jutras, Benoit
2013-01-01
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
Gfeller, Kate; Christ, Aaron; Knutson, John; Witt, Shelley; Mehr, Maureen
2003-01-01
The purposes of this study were (a) to develop a test of complex song appraisal that would be suitable for use with adults who use a cochlear implant (assistive hearing device) and (b) to compare the appraisal ratings (liking) of complex songs by adults who use cochlear implants (n = 66) with a comparison group of adults with normal hearing (n = 36). The article describes the development of a computerized test for appraisal, with emphasis on its theoretical basis and the process for item selection of naturalistic stimuli. The appraisal test was administered to the 2 groups to determine the effects of prior song familiarity and subjective complexity on complex song appraisal. Comparison of the 2 groups indicates that the implant users rate 2 of 3 musical genres (country western, pop) as significantly more complex than do normal hearing adults, and give significantly less positive ratings to classical music than do normal hearing adults. Appraisal responses of implant recipients were examined in relation to hearing history, age, performance on speech perception and cognitive tests, and musical background.
Effects of a cochlear implant simulation on immediate memory in normal-hearing adults
Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.
2012-01-01
This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating the signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants' normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from the predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than to memory processing errors related to cognitive load. These findings provide complementary information to earlier research on the auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of a hearing impairment. PMID:16317807
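The prediction step described above (combining normal spans with digit identification accuracy to predict degraded spans) can be illustrated with a deliberately simple model. The assumption below, that recalling an n-digit list requires correctly identifying every digit and that the predicted span is the longest list still recalled with at least 50% probability, is hypothetical and is not the authors' actual procedure.

```python
import math

def predicted_degraded_span(normal_span, digit_id_accuracy, criterion=0.5):
    """Hypothetical model: recall of an n-digit list requires correct
    identification of all n degraded digits (probability
    digit_id_accuracy ** n).  The predicted degraded span is the
    longest list still recalled with probability >= criterion, capped
    at the listener's normal (undegraded) span."""
    if digit_id_accuracy <= 0.0:
        return 0
    if digit_id_accuracy >= 1.0:
        return normal_span
    # Largest n with digit_id_accuracy ** n >= criterion.
    n_limit = math.floor(math.log(criterion) / math.log(digit_id_accuracy))
    return min(normal_span, max(n_limit, 0))
```

Under this toy model, near-ceiling identification leaves the span untouched, while even modest per-digit misidentification rates compound across the list and shrink the predicted span, which is the qualitative pattern the study reports.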
Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
2011-01-01
The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. 
Spearman correlation coefficients indicated that the cognitive load was larger in listeners with better TRT performances as reflected by a longer peak latency (normal-hearing participants, SRT50% condition) and a larger peak amplitude and longer response duration (hearing-impaired participants, SRT50% and SRT84% conditions). Also, a larger word vocabulary was related to longer response duration in the SRT84% condition for the participants with normal hearing. The pupil response systematically increased with decreasing speech intelligibility. Ageing and hearing loss were related to less release from effort when increasing the intelligibility of speech in noise. In difficult listening conditions, these factors may induce cognitive overload relatively early or they may be associated with relatively shallow speech processing. More research is needed to elucidate the underlying mechanisms explaining these results. Better TRTs and larger word vocabulary were related to higher mental processing load across speech intelligibility levels. This indicates that utilizing linguistic ability to improve speech perception is associated with increased listening load.
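The pupil-response measures named in the abstract above (peak amplitude, peak latency, and mean dilation) can be computed from one trial's pupil-diameter trace roughly as follows. The baseline-correction scheme and windowing here are illustrative assumptions; the study's exact preprocessing (blink interpolation, filtering, trial windows) is not reproduced.

```python
def pupil_metrics(trace, sample_rate_hz, baseline_samples):
    """Return (peak amplitude, peak latency in seconds, mean dilation)
    for one trial.  The trace is baseline-corrected by subtracting the
    mean of the pre-stimulus samples; latency is measured from the end
    of the baseline window to the sample with the largest dilation."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    response = [x - baseline for x in trace[baseline_samples:]]
    peak_amplitude = max(response)
    peak_latency_s = response.index(peak_amplitude) / sample_rate_hz
    mean_dilation = sum(response) / len(response)
    return peak_amplitude, peak_latency_s, mean_dilation
```

A response-duration measure (also used in the study) would additionally require choosing a threshold for when the dilation has returned toward baseline, which is omitted here.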
Consequences of Early Conductive Hearing Loss on Long-Term Binaural Processing.
Graydon, Kelley; Rance, Gary; Dowell, Richard; Van Dun, Bram
The aim of the study was to investigate the long-term effects of early conductive hearing loss on binaural processing in school-age children. One hundred and eighteen children participated in the study: 82 children with a documented history of conductive hearing loss associated with otitis media and 36 controls whose documented histories showed no evidence of otitis media or conductive hearing loss. All children were demonstrated to have normal hearing acuity and middle ear function at the time of assessment. The Listening in Spatialized Noise Sentence (LiSN-S) task and the masking level difference (MLD) task were used as two different measures of binaural interaction ability. Children with a history of conductive hearing loss performed significantly more poorly than controls on all LiSN-S conditions relying on binaural cues (DV90: p < 0.001; SV90: p = 0.003). No significant difference was found between the groups in listening conditions without binaural cues. Fifteen children with a conductive hearing loss history (18%) showed results consistent with a spatial processing disorder. No significant difference was observed between the conductive hearing loss group and the controls on the MLD task. Furthermore, no correlations were found between LiSN-S and MLD. Results show a relationship between early conductive hearing loss and listening deficits that persist once hearing has returned to normal. Results also suggest that the two binaural interaction tasks (LiSN-S and MLD) may be measuring binaural processing at different levels. Findings highlight the need for a screening measure of functional listening ability in children with a history of early otitis media.
Recognition and production of emotions in children with cochlear implants.
Mildner, Vesna; Koska, Tena
2014-01-01
The aim of this study was to examine auditory recognition and vocal production of emotions in three prelingually bilaterally profoundly deaf children aged 6-7 who received cochlear implants before age 2, and compare them with age-matched normally hearing children. No consistent advantage was found for the normally hearing participants. In both groups, sadness was recognized best and disgust was the most difficult. Confusion matrices among other emotions (anger, happiness, and fear) showed that children with and without hearing impairment may rely on different cues. Both groups of children showed that perception is superior to production. Normally hearing children were more successful in the production of sadness, happiness, and fear, but not anger or disgust. The data set is too small to draw any definite conclusions, but it seems that a combination of early implantation and regular auditory-oral-based therapy enables children with cochlear implants to process and produce emotional content comparable with children with normal hearing.
Ricketts, Todd A; Dittberner, Andrew B; Johnson, Earl E
2008-02-01
One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the magnitude of hearing loss in individual listeners. Ten participants with normal hearing and 20 participants with mild-to-moderate hearing loss completed the study. Signals were processed using hearing aid-style compression algorithms and filtered using two cutoff frequencies, 5.5 and 9 kHz, which were selected to represent bandwidths that are achievable in modern hearing aids. Round-robin paired comparisons based on the criteria of preferred sound quality were made for 2 different monaurally presented brief sound segments, including music and a movie. Results revealed that preference for either the wider or narrower bandwidth (9- or 5.5-kHz cutoff frequency, respectively) was correlated with the slope of hearing loss from 4 to 12 kHz, with steep threshold slopes associated with preference for narrower bandwidths. Consistent preference for wider bandwidth is present in some listeners with mild-to-moderate hearing loss.
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2017-07-01
Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.
Source Space Estimation of Oscillatory Power and Brain Connectivity in Tinnitus
Zobay, Oliver; Palmer, Alan R.; Hall, Deborah A.; Sereda, Magdalena; Adjamian, Peyman
2015-01-01
Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of structural and functional changes in the brain. However, the precise pathophysiology of tinnitus remains unknown. Llinas’ thalamocortical dysrhythmia model suggests that neural deafferentation due to hearing loss causes a dysregulation of coherent activity between thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex where normally alpha oscillations should occur. Numerous studies also suggest that tinnitus perception relies on the interplay between auditory and non-auditory brain areas. According to the Global Brain Model, a network of global fronto-parietal-cingulate areas is important in the generation and maintenance of the conscious perception of tinnitus. Thus, the distress experienced by many individuals with tinnitus is related to the top-down influence of this global network on auditory areas. In this magnetoencephalographic study, we compare resting-state oscillatory activity of tinnitus participants and normal-hearing controls to examine effects on spectral power as well as functional and effective connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the alpha band. A significant increase is also found for the effective connectivity from a global brain network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on spectral power. Overall, our results provide only limited support for the thalamocortical dysrhythmia and Global Brain models of tinnitus. PMID:25799178
Integrating Information from Different Senses in the Auditory Cortex
King, Andrew J.; Walker, Kerry M.M.
2015-01-01
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035
A Circuit for Motor Cortical Modulation of Auditory Cortical Activity
Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan
2013-01-01
Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287
Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
2016-01-01
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
Schoof, Tim; Rosen, Stuart
2014-01-01
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.
1997-01-01
We used surface-based representations to analyze functional specializations in the human cerebral cortex. A computerized reconstruction of the cortical surface of the Visible Man digital atlas was generated and transformed to the Talairach coordinate system. This surface was also flattened and used to establish a surface-based coordinate system that respects the topology of the cortical sheet. The linkage between two-dimensional and three-dimensional representations allows the locations of published neuroimaging activation foci to be stereotaxically projected onto the Visible Man cortical flat map. An analysis of two activation studies related to the hearing and reading of music and of words illustrates how this approach permits the systematic estimation of the degree of functional segregation and of potential functional overlap for different aspects of sensory processing.
Coelho, Ana Cristina; Brasolotto, Alcione Ghedini; Bevilacqua, Maria Cecília
2015-06-01
To compare some perceptual and acoustic characteristics of the voices of children who use the advanced combination encoder (ACE) or fine structure processing (FSP) speech coding strategies, and to investigate whether these characteristics differ from those of children with normal hearing. Acoustic analysis of the sustained vowel /a/ was performed using the multi-dimensional voice program (MDVP). Analyses of sequential and spontaneous speech were performed using the Real Time Pitch program. Perceptual analyses of these samples were performed using visual-analogic scales of pre-selected parameters. Seventy-six children from three years to five years and 11 months of age participated. Twenty-eight were users of ACE, 23 were users of FSP, and 25 were children with normal hearing. Although both groups with cochlear implants (CIs) presented some deviant vocal features, the voice quality of the ACE users was closer to that of children with normal hearing than that of the FSP users. The sound processing of ACE appeared to provide better conditions for auditory monitoring of the voice and, consequently, for better control of voice production. However, these findings need to be further investigated, given the lack of published comparative studies, to understand exactly which attributes of sound processing are responsible for differences in performance.
Li, Wenjing; Li, Jianhong; Xian, Junfang; Lv, Bin; Li, Meng; Wang, Chunheng; Li, Yong; Liu, Zhaohui; Liu, Sha; Wang, Zhenchang; He, Huiguang; Sabel, Bernhard A
2013-01-01
Prelingual deafness has been shown to lead to brain reorganization as demonstrated by functional parameters, but anatomical evidence remains controversial. The present study investigated hemispheric asymmetry changes in deaf subjects using MRI, hypothesizing changes in auditory-, language-, or visual-related regions after early deafness. Prelingually deaf adolescents (n = 16) and age- and gender-matched normal controls (n = 16) were recruited, and hemispheric asymmetry was evaluated with voxel-based morphometry (VBM) from MRI combined with analysis of cortical thickness (CTh). Deaf adolescents showed more rightward asymmetries (L < R) of grey matter volume (GMV) in the cerebellum and more leftward CTh asymmetries (L > R) in the posterior cingulate gyrus and gyrus rectus. More rightward CTh asymmetries were observed in the precuneus, middle and superior frontal gyri, and middle occipital gyrus. The duration of hearing aid use was correlated with the asymmetry of GMV in the cerebellum and of CTh in the gyrus rectus. Interestingly, the asymmetry of the auditory cortex was preserved in deaf subjects. When the brain is deprived of auditory input early in life, there are signs both of irreversible morphological asymmetry changes in different brain regions and of reorganization and plasticity that depend on hearing aid use, i.e. that are use-dependent.
Füllgrabe, Christian; Rosen, Stuart
2016-01-01
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory (WM). Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor for variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.
Stebbings, Kevin A; Choi, Hyun W; Ravindra, Aditya; Llano, Daniel Adolfo
2016-06-01
The relationships between oxidative stress in the hippocampus and other aging-related changes such as hearing loss, cortical thinning, or changes in body weight are not yet known. We measured the redox ratio in a number of neural structures in brain slices taken from young and aged mice. Hearing thresholds, body weight, and cortical thickness were also measured. We found striking aging-related increases in the redox ratio that were isolated to the stratum pyramidale, while such changes were not observed in thalamus or cortex. These changes were driven primarily by changes in flavin adenine dinucleotide, not nicotinamide adenine dinucleotide hydride. Multiple regression analysis suggested that neither hearing threshold nor cortical thickness independently contributed to this change in hippocampal redox ratio. However, body weight did independently contribute to predicted changes in hippocampal redox ratio. These data suggest that aging-related changes in hippocampal redox ratio are not a general reflection of overall brain oxidative state but are highly localized, while still being related to at least one marker of late aging, weight loss at the end of life. Copyright © 2016 Elsevier Inc. All rights reserved.
Maamor, Nashrah; Billings, Curtis J
2017-01-01
The purpose of this study was to determine the effects of noise type, signal-to-noise ratio (SNR), age, and hearing status on cortical auditory evoked potentials (CAEPs) to speech sounds. This helps to explain the hearing-in-noise difficulties often seen in the aging and hearing-impaired population. Continuous, modulated, and babble noise types were presented at varying SNRs to 30 individuals divided into three groups according to age and hearing status. Significant main effects of noise type, SNR, and group were found. Interaction effects revealed that the SNR effect varies as a function of noise type and is most systematic for continuous noise. Effects of age and hearing loss were limited to CAEP latency and were differentially modulated by energetic and informational-like masking. It is clear that the spectrotemporal characteristics of signals and noises play an important role in determining the morphology of neural responses. Participant factors, such as age and hearing status, also play an important role in determining the brain's response to complex auditory stimuli and contribute to the ability to listen in noise. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Nguyen, Anna; Khaleel, Haroun M; Razak, Khaleel A
2017-07-01
Noise-induced hearing loss is associated with increased excitability in the central auditory system, but the cellular correlates of such changes remain to be characterized. Here we tested the hypothesis that noise-induced hearing loss causes deterioration of perineuronal nets (PNNs) in the auditory cortex of mice. PNNs are specialized extracellular matrix components that commonly enwrap cortical parvalbumin (PV) containing GABAergic interneurons. Compared to somatosensory and visual cortex, relatively less is known about PV/PNN expression patterns in the primary auditory cortex (A1). Whether changes to cortical PNNs follow acoustic trauma remains unclear. The first aim of this study was to characterize PV/PNN expression in A1 of adult mice. PNNs increase excitability of PV+ inhibitory neurons and confer protection to these neurons against oxidative stress. Decreased PV/PNN expression may therefore lead to a reduction in cortical inhibition. The second aim of this study was to examine PV/PNN expression in superficial (I-IV) and deep cortical layers (V-VI) following noise trauma. Exposing mice to loud noise caused an increase in hearing threshold that lasted at least 30 days. PV and PNN expression in A1 was analyzed at 1, 10 and 30 days following the exposure. No significant changes were observed in the density of PV+, PNN+, or PV/PNN co-localized cells following hearing loss. However, a significant layer- and cell type-specific decrease in PNN intensity was seen following hearing loss. Some changes were present even at 1 day following noise exposure. Attenuation of PNNs may contribute to changes in excitability in the cortex following noise trauma. The regulation of PNNs may open up a temporal window for altered excitability in the adult brain that is then stabilized at a new and potentially pathological level, such as in tinnitus. Copyright © 2017 Elsevier B.V. All rights reserved.
Skoruppa, Katrin; Rosen, Stuart
2014-06-01
In this study, the authors explored phonological processing in connected speech in children with hearing loss. Specifically, the authors investigated these children's sensitivity to English place assimilation, by which alveolar consonants like t and n can adapt to following sounds (e.g., the word ten can be realized as tem in the phrase ten pounds). Twenty-seven 4- to 8-year-old children with moderate to profound hearing impairments, using hearing aids (n = 10) or cochlear implants (n = 17), and 19 children with normal hearing participated. They were asked to choose between pictures of familiar (e.g., pen) and unfamiliar objects (e.g., astrolabe) after hearing t- and n-final words in sentences. Standard pronunciations (Can you find the pen dear?) and assimilated forms in correct (… pem please?) and incorrect contexts (… pem dear?) were presented. As expected, the children with normal hearing chose the familiar object more often for standard forms and correct assimilations than for incorrect assimilations. Thus, they are sensitive to word-final place changes and compensate for assimilation. However, the children with hearing impairment demonstrated reduced sensitivity to word-final place changes, and no compensation for assimilation. Restricted analyses revealed that children with hearing aids who showed good perceptual skills compensated for assimilation in plosives only.
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul
2016-01-01
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
2017-07-01
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in postlingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Auditory-evoked cortical activity: contribution of brain noise, phase locking, and spectral power
Harris, Kelly C.; Vaden, Kenneth I.; Dubno, Judy R.
2017-01-01
Background The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response reflect only a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contributions of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude and how these variables may change across age groups. This information is important for appropriate interpretation of event-related potential (ERP) results and for understanding age-related neural pathologies. Methods EEG was measured from 25 younger and 25 older normal-hearing adults. Age-related and individual differences in P2 response amplitudes and variability in brain noise, phase locking value (PLV), and spectral power (4–8 Hz) were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. Results Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. Regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. Conclusions ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support the conclusion that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal-hearing adults.
Based on our results, we discuss potential models in which differences in neural synchrony and brain noise can account for associations with P2 amplitudes and behavior and potentially provide a better explanation of the neural mechanisms that underlie declines in auditory processing and training benefits. PMID:25046314
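The phase-locking measure described in this record can be illustrated with a minimal sketch (not the authors' pipeline; the single-trial data and the 4-8 Hz band-pass filtering step are assumed to be prepared elsewhere). The PLV at each time point is the length of the mean resultant vector of the instantaneous phases across trials:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials):
    """Inter-trial phase-locking value (PLV) per time point.

    trials: array of shape (n_trials, n_samples), already band-pass
    filtered (e.g. 4-8 Hz). Returns values in [0, 1], where 1 means
    identical phase across all trials (perfect neural synchrony).
    """
    phases = np.angle(hilbert(trials, axis=1))           # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phases), axis=0))  # resultant vector length
```

Low PLV with unchanged brain noise would correspond to the reduced neural synchrony that the study links to smaller P2 amplitudes in older adults.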
Background noise can enhance cortical auditory evoked potentials under certain conditions
Papesh, Melissa A.; Billings, Curtis J.; Baltzell, Lucas S.
2017-01-01
Objective To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. Methods CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. Results The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Conclusions Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Significance Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. PMID:25453611
Gardner-Berry, Kirsty; Chang, Hsiuwen; Ching, Teresa Y. C.; Hou, Sanna
2016-01-01
With the introduction of newborn hearing screening, infants are being diagnosed with hearing loss during the first few months of life. For infants with a sensory/neural hearing loss (SNHL), the audiogram can be estimated objectively using auditory brainstem response (ABR) testing and hearing aids prescribed accordingly. However, for infants with auditory neuropathy spectrum disorder (ANSD) due to the abnormal/absent ABR waveforms, alternative measures of auditory function are needed to assess the need for amplification and evaluate whether aided benefit has been achieved. Cortical auditory evoked potentials (CAEPs) are used to assess aided benefit in infants with hearing loss; however, there is insufficient information regarding the relationship between stimulus audibility and CAEP detection rates. It is also not clear whether CAEP detection rates differ between infants with SNHL and infants with ANSD. This study involved retrospective collection of CAEP, hearing threshold, and hearing aid gain data to investigate the relationship between stimulus audibility and CAEP detection rates. The results demonstrate that increases in stimulus audibility result in an increase in detection rate. For the same range of sensation levels, there was no difference in the detection rates between infants with SNHL and ANSD. PMID:27587922
Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline
2018-04-27
In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years; months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a similar trend of age of elimination to CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing their speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.
Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J
2016-01-28
Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and that this cross-modal reorganization limits the clinical benefit of cochlear prosthetics. However, results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL) are inconsistent, and it is unclear whether acquired monaural deafness produces cross-modal plasticity of the auditory cortex similar to that seen in early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated, and all USNHL patients had acquired deafness. In left long-term USNHL individuals, certain brain regions of the sensorimotor and visual networks showed enhanced synchronous output entropy connectivity with the left primary auditory cortex compared with normally hearing individuals. In particular, the left USNHL group showed more significant entropy-connectivity changes than the right USNHL group; no significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation on the left or right side thus has different effects on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Gordon-Salant, Sandra; Cole, Stacey Samuels
2016-01-01
This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. 
Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.
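The adaptive procedure described above tracks the signal-to-noise ratio yielding 50% correct recognition. As a rough illustration only (a simple 1-down/1-up staircase with a hypothetical psychometric function, not the study's actual protocol), the idea can be sketched as:

```python
import math
import random

def staircase_snr50(p_correct_at, start_snr=10.0, step=2.0, trials=200, seed=1):
    """Minimal 1-down/1-up adaptive track; converges near the 50%-correct SNR."""
    rng = random.Random(seed)
    snr, track = start_snr, []
    for _ in range(trials):
        correct = rng.random() < p_correct_at(snr)  # simulate a listener response
        snr += -step if correct else step           # harder after correct, easier after miss
        track.append(snr)
    return sum(track[-50:]) / 50.0                  # average of late trials ~ threshold

# hypothetical psychometric function: 50% correct at 0 dB SNR
est = staircase_snr50(lambda s: 1.0 / (1.0 + math.exp(-s / 2.0)))
```

Because each correct response makes the next trial harder and each miss makes it easier, the track settles around the SNR where the two outcomes are equally likely, i.e. 50% correct.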
Association between heart rhythm and cortical sound processing.
Marcomini, Renata S; Frizzo, Ana Claúdia F; de Góes, Viviane B; Regaçone, Simone F; Garner, David M; Raimundo, Rodrigo D; Oliveira, Fernando R; Valenti, Vitor E
2018-04-26
Sound signal processing is an important factor in human conscious communication and may be assessed through cortical auditory evoked potentials (CAEP). Heart rate variability (HRV) provides information about autonomic regulation of heart rate. We investigated the association between resting HRV and CAEP, evaluating resting HRV in the time and frequency domains together with the CAEP components. The subjects remained at rest for 10 minutes for HRV recording, then performed the CAEP examinations through frequency and duration protocols in both ears. In the frequency protocol, linear regression indicated that the amplitude of the N2 wave of the CAEP in the left ear (but not the right ear) was significantly influenced by two time-domain HRV indices: the standard deviation of normal-to-normal RR intervals (SDNN; 17.7%) and the percentage of adjacent RR intervals differing by more than 50 milliseconds (pNN50; 25.3%). In the duration protocol, in the left ear, the latency of the P2 wave was significantly influenced by the low-frequency (LF; 20.8%) and high-frequency (HF) bands in normalized units (21%) and by the LF/HF ratio (22.4%) from HRV spectral analysis. The latency of the N2 wave was significantly influenced by LF (25.8%), HF (25.9%) and LF/HF (28.8%). In conclusion, these findings suggest that resting heart rhythm is associated with the thalamo-cortical, cortico-cortical and auditory cortex pathways involved in auditory processing in the right hemisphere.
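The time-domain HRV indices used here, the standard deviation of normal-to-normal RR intervals (SDNN) and the percentage of successive RR differences exceeding 50 ms (pNN50), have standard definitions. A minimal sketch of how they are computed from a sequence of RR intervals (illustrative only, not the study's analysis pipeline; the interval values are made up):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Compute SDNN and pNN50 from a list of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                    # SDNN: sample SD of normal-to-normal intervals
    diffs = np.abs(np.diff(rr))              # successive-interval differences
    pnn50 = 100.0 * np.mean(diffs > 50.0)    # pNN50: % of successive diffs > 50 ms
    return sdnn, pnn50

sdnn, pnn50 = hrv_time_domain([800, 810, 790, 850, 795, 805])
```

SDNN reflects overall variability, while pNN50 is sensitive to beat-to-beat (parasympathetically mediated) changes, which is why the two can dissociate.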
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-27
... areas of hearing and balance; smell and taste; and voice, speech, and language. The Strategic Plan... research training in the normal and disordered processes of hearing, balance, smell, taste, voice, speech... into three program areas: Hearing and balance; smell and taste; and voice, speech, and language. The...
Cortical thickness in neuropsychologically near-normal schizophrenia.
Cobia, Derin J; Csernansky, John G; Wang, Lei
2011-12-01
Schizophrenia is a severe psychiatric illness with widespread impairments of cognitive functioning; however, a certain percentage of subjects are known to perform in the normal range on neuropsychological measures. While the cognitive profiles of these individuals have been examined, there has been relatively little attention to the neuroanatomical characteristics of this important subgroup. The aims of this study were to statistically identify schizophrenia subjects with relatively normal cognition, examine their neuroanatomical characteristics relative to their more impaired counterparts using cortical thickness mapping, and to investigate relationships between these characteristics and demographic variables to better understand the nature of cognitive heterogeneity in schizophrenia. Clinical, neuropsychological, and MRI data were collected from schizophrenia (n = 79) and healthy subjects (n = 65). A series of clustering algorithms on neuropsychological scores was examined, and a 2-cluster solution that separated subjects into neuropsychologically near-normal (NPNN) and neuropsychologically impaired (NPI) groups was determined most appropriate. Surface-based cortical thickness mapping was utilized to examine differences in thinning among schizophrenia subtypes compared with the healthy participants. A widespread cortical thinning pattern characteristic of schizophrenia emerged in the NPI group, while NPNN subjects demonstrated very limited thinning relative to healthy comparison subjects. Analysis of illness duration indicated minimal effects on subtype classification and cortical thickness results. Findings suggest a strong link between cognitive impairment and cortical thinning in schizophrenia, where subjects with near-normal cognitive abilities also demonstrate near-normal cortical thickness patterns. 
While generally supportive of distinct etiological processes for cognitive subtypes, results provide direction for further examination of additional neuroanatomical differences. Copyright © 2011 Elsevier B.V. All rights reserved.
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Shaheen, Elham Ahmed; Shohdy, Sahar Saad; Abd Al Raouf, Mahmoud; Mohamed El Abd, Shereen; Abd Elhamid, Asmss
2011-09-01
Specific language impairment is a relatively common developmental condition in which a child fails to develop language at the typical rate despite normal general intellectual abilities, adequate exposure to language, and the absence of hearing impairment or of neurological or psychiatric disorders. There is much controversy about the extent to which auditory processing deficits are important in the genesis of specific language impairment. The objective of this paper is to assess the higher cortical functions in children with specific language impairment by assessing neurophysiological changes, in order to correlate the results with the clinical picture of the patients and choose the proper rehabilitation training program. This study was carried out on 40 children diagnosed with specific language impairment and 20 normal children as a control group. All children were subjected to the assessment protocol applied in Kasr El-Aini hospital. They were also given a language test (receptive, expressive and total language items), the audio-vocal items of the Illinois test of psycholinguistic abilities (auditory reception, auditory association, verbal expression, grammatical closure, auditory sequential memory and sound blending), as well as audiological assessment that included peripheral audiological evaluation and P300 amplitude and latency assessment. The results revealed a highly significant difference in P300 amplitude and latency between the specific language impairment group and the control group. There were also strong correlations between P300 latency and grammatical closure, auditory sequential memory and sound blending, and significant correlations between P300 amplitude and auditory association and verbal expression. 
Children with specific language impairment, despite normal peripheral hearing, show cognitive and central auditory processing deficits on the P300 auditory event-related potential: prolonged latency, indicating a slow rate of processing, and small amplitude, indicating defective memory. These deficits affect cognitive and language development in children with specific language impairment and should be considered when planning the intervention program. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Assessment of central auditory processing in a group of workers exposed to solvents.
Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan
2006-12-01
Despite having normal hearing thresholds and speech recognition thresholds, a group of workers exposed to solvents showed abnormal results on central auditory tests. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds; a central auditory processing disorder may underlie these difficulties. The aim was to study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. In the majority of the tests, workers exposed to solvents had poorer results than the control group and than previously reported normative data.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Remijn, Gerard B.; Kikuchi, Mitsuru; Yoshimura, Yuko; Shitamichi, Kiyomi; Ueno, Sanae; Tsubokawa, Tsunehisa; Kojima, Haruyuki; Higashida, Haruhiro; Minabe, Yoshio
2017-01-01
Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker's reason for…
Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier
2016-10-01
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analysed and compared visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which is important for the patient's internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fu, Qian-Jie; Chinchilla, Sherol; Galvin, John J
2004-09-01
The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4-8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Speech timing and working memory in profoundly deaf children after cochlear implantation
Burkholder, Rose A.; Pisoni, David B.
2012-01-01
Thirty-seven profoundly deaf children between 8 and 9 years old with cochlear implants and a comparison group of normal-hearing children were studied to measure speaking rates, digit spans, and speech timing during digit span recall. The deaf children displayed longer sentence durations and pauses during recall and shorter digit spans compared to the normal-hearing children. Articulation rates, measured from sentence durations, were strongly correlated with immediate memory span in both normal-hearing and deaf children, indicating that both slower subvocal rehearsal and scanning processes may be factors that contribute to the deaf children's shorter digit spans. These findings demonstrate that subvocal verbal rehearsal speed and memory scanning processes are not solely dependent on chronological age, as suggested in earlier research by Cowan and colleagues (1998). Instead, in this clinical population the absence of early auditory experience and phonological processing activities before implantation appears to produce measurable effects on the working memory processes that rely on verbal rehearsal and serial scanning of phonological information in short-term memory. PMID:12742763
Na, Wondo; Kim, Gibbeum; Kim, Gungu; Han, Woojae; Kim, Jinsook
2017-01-01
The current study aimed to evaluate hearing-related changes in speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. One hundred subjects aged 65-84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All the participants were tested for speech perception in quiet and noisy conditions and for perception of time-altered speech in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants' working memory. 1) As the level of background noise increased, speech perception scores systematically decreased in all the groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal hearing group. 2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, sentences time-compressed by 30% clearly differentiated moderate hearing loss from moderate-to-severe hearing loss. 3) Although all the groups showed a longer span on the forward-digit span test than on the backward-digit span test, there was no significant difference as a function of hearing loss. The degree of hearing loss strongly affects recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to inform appropriate rehabilitation strategies for hearing-impaired elderly who experience difficulty in communication.
ERIC Educational Resources Information Center
Markevych, Vladlena; Asbjornsen, Arve E.; Lind, Ola; Plante, Elena; Cone, Barbara
2011-01-01
The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects with age range from 18 to 39, balanced for gender with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV-syllables in a…
Musical hallucination associated with hearing loss.
Sanchez, Tanit Ganz; Rocha, Savya Cybelle Milhomem; Knobel, Keila Alessandra Baraldi; Kii, Márcia Akemi; Santos, Rosa Maria Rodrigues dos; Pereira, Cristiana Borges
2011-01-01
Although musical hallucinations have a significant impact on patients' lives, they have received very little attention from experts. Some researchers agree on a combination of peripheral and central dysfunctions as the mechanism that causes hallucinations. The most widely accepted pathophysiology of musical hallucination associated with hearing loss (caused by cochlear lesion, cochlear nerve lesion, or interruption of mesencephalic or pontine auditory information) is disinhibition of auditory memory circuits due to sensory deprivation. Concerning the cortical area involved in musical hallucination, there is evidence that an excitatory mechanism of the superior temporal gyrus, as in epilepsies, is responsible for musical hallucination. In musical release hallucination there is also activation of the auditory association cortex. Finally, considering laterality, functional studies of musical perception and imagery in normal individuals showed that songs with words cause bilateral temporal activation whereas melodies activate only the right lobe. The improvement of musical hallucination with hearing aids, as a result of improved hearing, is well documented; this occurs because auditory hallucinations may be influenced by the external acoustic environment. Neuroleptics, antidepressants and anticonvulsants have been used in the treatment of musical hallucination. Cases of improvement with the administration of carbamazepine, moclobemide and donepezil have been reported, but the results obtained were not consistent.
Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception
Coffey, Emily B. J.; Chepesiuk, Alexander M. P.; Herholz, Sibylle C.; Baillet, Sylvain; Zatorre, Robert J.
2017-01-01
Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance, such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response, FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions. PMID:28890684
ERIC Educational Resources Information Center
Swartz, Daniel B.
This study examined four male homosexual, sociocultural groups: normal-hearing homosexuals with normal-hearing parents, deaf homosexuals with normal-hearing parents, deaf homosexuals with hearing-impaired parents, and hard-of-hearing homosexuals with normal-hearing parents. Differences with regard to self-perception, identity, and attitudes were…
2014-01-01
Background Type II focal cortical dysplasias (FCDs) are malformations of cortical development characterised by the disorganisation of the normal neocortical structure and the presence of dysmorphic neurons (DNs) and balloon cells (BCs). The pathogenesis of FCDs has not yet been clearly established, although a number of histopathological patterns and molecular findings suggest that they may be due to abnormal neuronal and glial proliferation and migration processes. In order to gain further insights into cortical layering disruption and investigate the origin of DNs and BCs, we used in situ RNA hybridisation of human surgical specimens with a neuropathologically definite diagnosis of Type IIa/b FCD and a panel of layer-specific genes (LSGs) whose expression covers all cortical layers. We also used anti-phospho-S6 ribosomal protein antibody to investigate mTOR pathway hyperactivation. Results LSGs were expressed in both normal and abnormal cells (BCs and DNs) but their distribution was different. Normal-looking neurons, which were visibly reduced in the core of the lesion, were apparently located in the appropriate cortical laminae thus indicating a partial laminar organisation. On the contrary, DNs and BCs, labelled with anti-phospho-S6 ribosomal protein antibody, were spread throughout the cortex without any apparent rule and showed a highly variable LSG expression pattern. Moreover, LSGs did not reveal any differences between Type IIa and IIb FCD. Conclusion These findings suggest the existence of hidden cortical lamination involving normal-looking neurons, which retain their ability to migrate correctly in the cortex, unlike DNs which, in addition to their morphological abnormalities and mTOR hyperactivation, show an altered migratory pattern. Taken together these data suggest that an external or environmental hit affecting selected precursor cells during the very early stages of cortical development may disrupt normal cortical development. PMID:24735483
Cortical Measures of Binaural Processing Predict Spatial Release from Masking Performance
Papesh, Melissa A.; Folmer, Robert L.; Gallun, Frederick J.
2017-01-01
Binaural sensitivity is an important contributor to the ability to understand speech in adverse acoustical environments such as restaurants and other social gatherings. The ability to accurately report on binaural percepts is not commonly measured, however, as extensive training is required before reliable measures can be obtained. Here, we investigated the use of auditory evoked potentials (AEPs) as a rapid physiological indicator of detection of interaural phase differences (IPDs) by assessing cortical responses to 180° IPDs embedded in amplitude-modulated carrier tones. We predicted that decrements in encoding of IPDs would be evident in middle age, with further declines found with advancing age and hearing loss. Thus, participants in experiment #1 were young to middle-aged adults with relatively good hearing thresholds while participants in experiment #2 were older individuals with typical age-related hearing loss. Results revealed that while many of the participants in experiment #1 could encode IPDs in stimuli up to 1,000 Hz, few of the participants in experiment #2 had discernable responses to stimuli above 750 Hz. These results are consistent with previous studies that have found that aging and hearing loss impose frequency limits on the ability to encode interaural phase information present in the fine structure of auditory stimuli. We further hypothesized that AEP measures of binaural sensitivity would be predictive of participants' ability to benefit from spatial separation between sound sources, a phenomenon known as spatial release from masking (SRM) which depends upon binaural cues. Results indicate that not only were objective IPD measures well correlated with and predictive of behavioral SRM measures in both experiments, but that they provided much stronger predictive value than age or hearing loss. 
Overall, the present work shows that objective measures of the encoding of interaural phase information can be readily obtained using commonly available AEP equipment, allowing accurate determination of the degree to which binaural sensitivity has been reduced in individual listeners due to aging and/or hearing loss. In fact, objective AEP measures of interaural phase encoding are actually better predictors of SRM in speech-in-speech conditions than are age, hearing loss, or the combination of age and hearing loss. PMID:28377706
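The central comparison above, that the AEP measure of interaural phase encoding predicts SRM better than age or hearing loss, can be illustrated with a toy correlation analysis. All data and variable names below are synthetic assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# Synthetic per-listener measures; names and numbers are illustrative.
aep_limit = rng.uniform(500, 1000, n)            # highest frequency (Hz) with a detectable IPD response
srm = 0.01 * aep_limit + rng.normal(0, 0.5, n)   # spatial release from masking (dB), driven by aep_limit
age = rng.uniform(20, 80, n)                     # unrelated to srm in this toy dataset

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# The study's claim in miniature: the physiological measure predicts SRM
# better than age does.
assert abs(pearson_r(aep_limit, srm)) > abs(pearson_r(age, srm))
```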
Sad and happy emotion discrimination in music by children with cochlear implants.
Hopyan, Talar; Manno, Francis A M; Papsin, Blake C; Gordon, Karen A
2016-01-01
Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M = 12.7, SD = 2.6 years) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy than that of their normal hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than normal hearing children.
Auditory phonological priming in children and adults during word repetition
NASA Astrophysics Data System (ADS)
Cleary, Miranda; Schwartz, Richard G.
2004-05-01
Short-term auditory phonological priming effects involve changes in the speed with which words are processed by a listener as a function of recent exposure to other similar-sounding words. Activation of phonological/lexical representations appears to persist beyond the immediate offset of a word, influencing subsequent processing. Priming effects are commonly cited as demonstrating concurrent activation of word/phonological candidates during word identification. Phonological priming is controversial, with the direction of effects (facilitating versus slowing) varying with the prime-target relationship. In adults, however, it has repeatedly been demonstrated that hearing a prime word that rhymes with the following target word (ISI = 50 ms) decreases the time necessary to initiate repetition of the target, relative to when the prime and target have no phonemic overlap. Activation of phonological representations in children has not typically been studied using this paradigm, auditory-word + picture-naming tasks being used instead. The present study employed an auditory phonological priming paradigm being developed for use with normal-hearing and hearing-impaired children. Initial results from normal-hearing adults replicate previous reports of faster naming times for targets following a rhyming prime word than for targets following a prime having no phonemes in common. Results from normal-hearing children will also be reported. [Work supported by NIH-NIDCD T32DC000039.]
Alderete, Tanya L.; Chang, Daniel
2010-01-01
The cortical nucleus LMAN (lateral magnocellular nucleus of the anterior nidopallium) provides the output of a basal ganglia pathway that is necessary for acquisition of learned vocal behavior during development in songbirds. LMAN is composed of two subregions, a core and a surrounding shell, that give rise to independent pathways that traverse the forebrain in parallel. The LMANshell pathway forms a recurrent loop that includes a cortical region, the dorsal region of the caudolateral nidopallium (dNCL), hitherto unknown to be involved with learned vocal behavior. Here we show that vocal production strongly induces the IEG product ZENK in dNCL of zebra finches. Hearing tutor song while singing is more effective at inducing expression in dNCL of juvenile birds during the auditory–motor integration stage of vocal learning than is hearing conspecific song. In contrast, hearing conspecific song is relatively more effective at inducing expression in adult birds, regardless of whether they are producing song. Furthermore, ZENK+ neurons in dNCL include projection neurons that are part of the LMANshell recurrent loop and a high proportion of dNCL projection neurons express ZENK in singing juvenile birds that hear tutor song. Thus juvenile birds that are actively refining their vocal pattern to imitate a tutor song show high levels of ZENK induction in dNCL neurons when they are singing while hearing the song of their tutor and low levels when they hear a novel conspecific. This pattern indicates that dNCL is a novel brain region involved with vocal learning and that its function is developmentally regulated. PMID:20107119
Engle, James R.; Recanzone, Gregg H.
2012-01-01
Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficits are peripheral in origin, others are centrally mediated. One such deficit is the ability to localize sounds, a critical component for segregating different acoustic objects and events, which is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning from neurons in primary auditory cortex to the caudal lateral field does not occur as it does in younger animals. As a decrease in inhibition with aging is common in the ascending auditory system, it is possible that this lack of spatial tuning sharpening is due to a decrease in inhibition at different periods within the response. It is also possible that spatial tuning was decreased as a consequence of reduced inhibition at non-best locations. In this report we found that aged animals had greater activity throughout the response period, but primarily during the onset of the response. This was most prominent at non-best directions, which is consistent with the hypothesis that inhibition is a primary mechanism for sharpening spatial tuning curves. We also noted that in aged animals the latency of the response was much shorter than in younger animals, which is consistent with a decrease in pre-onset inhibition. These results can be interpreted in the context of a failure of the timing and efficiency of feed-forward thalamo-cortical and cortico-cortical circuits in aged animals. Such a mechanism, if generalized across cortical areas, could play a major role in age-related cognitive decline. PMID:23316160
SDF1 regulates leading process branching and speed of migrating interneurons
Lysko, Daniel E.; Putt, Mary; Golden, Jeffrey A.
2011-01-01
Cell migration is required for normal embryonic development, yet how cells navigate complex paths while integrating multiple guidance cues remains poorly understood. During brain development, interneurons migrate from the ventral ganglionic eminence to the cerebral cortex within several migratory streams. They must exit these streams to invade the cortical plate. While SDF1-signaling is necessary for normal interneuron stream migration, how they switch from tangential stream migration to invade the cortical plate is unknown. Here we demonstrate that SDF1-signaling reduces interneuron branching frequency by reducing cAMP levels via a Gi-signaling pathway using an in vitro mouse explant system, resulting in the maintenance of stream migration. Blocking SDF1-signaling, or increasing branching frequency, results in stream exit and cortical plate invasion in mouse brain slices. These data support a novel model to understand how migrating interneurons switch from tangential migration to invade the cortical plate in which reducing SDF1-signaling increases leading process branching and slows the migration rate, permitting migrating interneurons to sense cortically directed guidance cues. PMID:21289183
Van Dun, Bram; Wouters, Jan; Moonen, Marc
2009-07-01
Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, best response detection is obtained when noise-weighted averaging is applied on single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows recording of near-optimal signal-to-noise ratios for 80% of subjects.
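A detection-theory approach to ASSR detection is commonly implemented as an F-ratio test: spectral power at the modulation frequency compared against the mean power of neighboring noise bins. A minimal single-channel sketch on synthetic data; the bin counts, signal levels, and threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def assr_f_ratio(eeg, fs, mod_freq, n_noise_bins=60):
    """F-ratio for an ASSR: power in the bin nearest mod_freq divided by
    the mean power of n_noise_bins neighboring bins. Values near 1 suggest
    no response; large values suggest a response is present."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    k = int(np.argmin(np.abs(freqs - mod_freq)))
    half = n_noise_bins // 2
    neighbors = np.r_[spec[k - half:k], spec[k + 1:k + 1 + half]]
    return spec[k] / neighbors.mean()

# Synthetic 4-s recording at 1 kHz: a 90-Hz steady-state component in noise
# versus noise alone.
fs = 1000
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
present = np.sin(2 * np.pi * 90 * t) + 2 * rng.standard_normal(t.size)
absent = 2 * rng.standard_normal(t.size)
assert assr_f_ratio(present, fs, 90) > assr_f_ratio(absent, fs, 90)
```

In a multi-channel extension of this idea, channels (or epochs) can be combined with noise-weighted averaging, weighting each by its inverse noise power, before computing the ratio.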
P300 in individuals with sensorineural hearing loss.
Reis, Ana Cláudia Mirandola Barbosa; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima; Garcia, Cristiane Fregonesi Dutra; Funayama, Carolina Araújo Rodrigues; Iório, Maria Cecília Martinelli
2015-01-01
Behavioral and electrophysiological auditory evaluations contribute to the understanding of the auditory system and of the process of intervention. To study P300 in subjects with severe or profound sensorineural hearing loss. This was a descriptive cross-sectional prospective study. It included 29 individuals of both genders with severe or profound sensorineural hearing loss without other types of disorder, aged 11 to 42 years; all were assessed by behavioral audiological evaluation and auditory evoked potentials. A recording of the P3 wave was obtained in 17 individuals, with a mean latency of 326.97 ms and mean amplitude of 3.76 µV. There were significant differences in latency in relation to age and in amplitude according to degree of hearing loss. There was a statistically significant association of the P300 results with the degrees of hearing loss (p = 0.04), with the predominant auditory communication channels (p < 0.0001), and with time of hearing loss. P300 can be recorded in individuals with severe and profound congenital sensorineural hearing loss; it may contribute to the understanding of cortical development and is a good predictor of the early intervention outcome. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
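Extracting P3 latency and amplitude from an averaged waveform amounts to peak-picking in a late positive window. A sketch using the group means reported above as toy parameters; the 250-500 ms window is a common convention, not necessarily the study's stated value:

```python
import numpy as np

def p300_peak(erp, times, window=(250, 500)):
    """Largest positive deflection in the P3 window.
    Returns (latency_ms, amplitude) of the peak sample."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = int(np.argmax(erp[mask]))
    return float(times[mask][idx]), float(erp[mask][idx])

# Toy averaged ERP: a Gaussian positivity centered near the reported group
# mean latency (326.97 ms) with the reported mean amplitude (3.76 µV).
times = np.arange(0, 800, 2.0)     # ms; 500-Hz sampling assumed
erp = 3.76 * np.exp(-(((times - 326.97) / 40.0) ** 2))
lat, amp = p300_peak(erp, times)
assert abs(lat - 326.97) < 4.0 and amp > 3.7
```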
Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew
2013-01-01
Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and normalization approach is most useful for AD classification and to predict mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The Freesurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index and curvature index measures. 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalisation approaches were explored and the optimal combination of measures determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized by ICV) using OPLS gave a prediction accuracy of 91.5% when distinguishing AD versus CTL. This model prospectively predicted future decline from MCI to AD with 75.9% of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of un-normalised cortical thickness measures and volumes normalised by ICV.
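The normalization the study found optimal, dividing regional volumes by intracranial volume (ICV) while leaving cortical thickness raw, can be illustrated on synthetic data. The proportional-correction method and all variable names below are assumptions for illustration; the rationale is that regional volumes scale with head size but cortical thickness largely does not:

```python
import numpy as np

def icv_normalize(vol, icv):
    """Proportional ICV correction: express each regional volume as a
    fraction of total intracranial volume (one common strategy)."""
    return vol / icv

# Synthetic cohort: hippocampal volume scales with head size, thickness
# does not (values and coefficients are illustrative).
rng = np.random.default_rng(3)
n = 100
icv = rng.normal(1500, 150, n)               # intracranial volume, cm^3
hippo = 0.004 * icv + rng.normal(0, 0.2, n)  # regional volume tracks head size
thickness = rng.normal(2.5, 0.2, n)          # cortical thickness, mm

raw_r = float(np.corrcoef(hippo, icv)[0, 1])
norm_r = float(np.corrcoef(icv_normalize(hippo, icv), icv)[0, 1])
# After correction, the head-size dependence is largely removed, so the
# normalized feature can be stacked with raw thickness for classification.
assert abs(norm_r) < abs(raw_r)
```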
Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study
Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina
2013-01-01
To study how auditory cortical processing is affected by anticipating and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that following the cue that indicates the valence of the upcoming sound, the auditory-cortex activity is modulated by the upcoming sound category during the anticipation period. PMID:24278270
Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.
Hazell, J W; Jastreboff, P J
1990-02-01
A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose output undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and interaction with the prefrontal cortex and limbic system, is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.
Effect of Exogenous Cues on Covert Spatial Orienting in Deaf and Normal Hearing Individuals
Prasad, Seema Gorur; Patil, Gouri Shanker; Mishra, Ramesh Kumar
2015-01-01
Deaf individuals have been known to process visual stimuli better at the periphery compared to the normal hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined if the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for the ocular responses than the normal hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of groups. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in the deaf. PMID:26517363
Differences in interregional brain connectivity in children with unilateral hearing loss.
Jung, Matthew E; Colletta, Miranda; Coalson, Rebecca; Schlaggar, Bradley L; Lieu, Judith E C
2017-11-01
To identify functional network architecture differences in the brains of children with unilateral hearing loss (UHL) using resting-state functional-connectivity magnetic resonance imaging (rs-fcMRI). Prospective observational study. Children (7 to 17 years of age) with severe to profound hearing loss in one ear, along with their normal hearing (NH) siblings, were recruited and imaged using rs-fcMRI. Eleven children had right UHL; nine had left UHL; and 13 had normal hearing. Forty-one brain regions of interest culled from established brain networks, such as the default mode (DMN), cingulo-opercular (CON), and frontoparietal (FPN) networks, as well as regions for language, phonological, and visual processing, were analyzed using regionwise correlations and conjunction analysis to determine differences in functional connectivity between the UHL and normal hearing children. When compared to the NH group, children with UHL showed increased connectivity patterns between multiple networks, such as between the CON and visual processing centers. However, there were also decreased and aberrant connectivity patterns, such as coactivation of the DMN and FPN, a relationship that is usually negatively correlated. Children with UHL demonstrate multiple functional connectivity differences between brain networks involved with executive function, cognition, and language comprehension that may represent adaptive as well as maladaptive changes. These findings suggest that possible interventions or habilitation, beyond amplification, might be able to affect some children's requirement for additional help at school. 3b. Laryngoscope, 127:2636-2645, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
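The regionwise rs-fcMRI analysis above reduces, at its core, to correlating ROI time series. A generic sketch on toy data; the study's full pipeline (network definitions, conjunction analysis, group statistics) is not reproduced here:

```python
import numpy as np

def connectivity_matrix(ts):
    """Region-by-region functional connectivity: Pearson correlation of
    ROI time series (rows = timepoints, columns = regions)."""
    return np.corrcoef(ts, rowvar=False)

# Toy data: 200 timepoints, 4 ROIs; ROIs 0 and 1 share a common signal,
# mimicking two regions of the same network.
rng = np.random.default_rng(4)
shared = rng.standard_normal(200)
ts = rng.standard_normal((200, 4))
ts[:, 0] += shared
ts[:, 1] += shared
C = connectivity_matrix(ts)
# Coupled ROIs correlate more strongly than uncoupled ones; group
# differences (UHL vs. NH) would be tested on such matrix entries.
assert C[0, 1] > C[0, 2]
```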
What Can We Learn about Auditory Processing from Adult Hearing Questionnaires?
Bamiou, Doris-Eva; Iliadou, Vasiliki Vivian; Zanchetta, Sthella; Spyridakou, Chrysa
2015-01-01
Questionnaires addressing auditory disability may identify and quantify specific symptoms in adult patients with listening difficulties. (1) To assess the validity of the Speech, Spatial, and Qualities of Hearing Scale (SSQ), the (Modified) Amsterdam Inventory for Auditory Disability (mAIAD), and the Hyperacusis Questionnaire (HYP) in adult patients experiencing listening difficulties in the presence of a normal audiogram. (2) To examine which individual questionnaire items give the worst scores in clinical participants with an auditory processing disorder (APD). A prospective correlational analysis study. Clinical participants (N = 58) referred for assessment because of listening difficulties in the presence of normal audiometric thresholds to audiology/ear, nose, and throat or audiovestibular medicine clinics. Normal control participants (N = 30). The mAIAD, HYP, and the SSQ were administered to a clinical population of nonneurological adults who were referred for auditory processing (AP) assessment because of hearing complaints, in the presence of normal audiogram and cochlear function, and to a sample of age-matched normal-hearing controls, before the AP testing. Clinical participants with abnormal results in at least one ear and in at least two tests of AP (with at least one of these tests nonspeech) were classified as clinical APD (N = 39), and the remaining (16 of whom had a single test abnormality) as clinical non-APD (N = 19). The SSQ correlated strongly with the mAIAD and the HYP, and correlation was similar within the clinical group and the normal controls. All questionnaire total scores and subscores (except sound distinction of mAIAD) were significantly worse in the clinical APD versus the normal group, while questionnaire total scores and most subscores indicated greater listening difficulties for the clinical non-APD versus the normal subgroups.
Overall, the clinical non-APD group tended to give better scores than the APD group in all questionnaires administered. Correlation was strong for the worse-ear gaps-in-noise threshold with the SSQ, mAIAD, and HYP; strong to moderate for the speech in babble and left-ear dichotic digit test scores (at p < 0.01); and weak to moderate for the remaining AP tests, except the frequency pattern test, which did not correlate. The worst-scored items in all three questionnaires concerned speech-in-noise questions. This is similar to the worst-scored items reported by hearing-impaired participants in the literature. Worst-scored items of the clinical group also included quality aspects of listening questions from the SSQ, which most likely pertain to cognitive aspects of listening, such as the ability to ignore other sounds and listening effort. Hearing questionnaires may help assess symptoms of adults with APD. The listening difficulties and needs of adults with APD to some extent overlap with those of hearing-impaired listeners, but there are significant differences. The correlation of the gaps-in-noise and duration pattern (but not frequency pattern) tests with the questionnaire scores indicates that temporal processing deficits may play an important role in clinical presentation. American Academy of Audiology.
Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.
Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V
2012-01-01
Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss on speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all the groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.
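The -5 dB SNR condition used above can be made concrete: mixing speech with babble at a target SNR amounts to scaling the noise so that the ratio of speech power to noise power hits the target. A minimal sketch in Python, assuming signals as plain sample lists; the function name and toy signals are illustrative, not from the study:

```python
import math

def scale_noise_for_snr(speech, noise, target_snr_db):
    """Scale `noise` so that mixing it with `speech` yields the target SNR.

    SNR_dB = 10 * log10(P_speech / P_noise), where P is mean squared amplitude.
    Returns the mixed signal and the gain applied to the noise.
    """
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Noise power required for the target SNR, then the amplitude gain.
    required_p_noise = p_speech / (10 ** (target_snr_db / 10))
    gain = math.sqrt(required_p_noise / p_noise)
    mixed = [s + gain * n for s, n in zip(speech, noise)]
    return mixed, gain
```

A negative target such as -5 dB means the scaled babble carries more power than the speech, which is what makes the recognition task adverse.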
Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.
Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M
2015-01-01
The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.
Chao, Linda L; Reeb, Rosemary; Esparza, Iva L; Abadjian, Linda R
2016-03-01
We previously reported evidence of reduced cortical gray matter (GM), white matter (WM), and hippocampal volume in Gulf War (GW) veterans with predicted exposure to low levels of nerve agent according to the 2000 Khamisiyah plume model analysis. Because there is suggestive evidence that other nerve agent exposures may have occurred during the Gulf War, we examined the association between the self-reported frequency of hearing chemical alarms sound during deployment in the Gulf War and regional brain volume in GW veterans. Ninety consecutive GW veterans (15 female, mean age: 52±8 years) participating in a VA-funded study underwent structural magnetic resonance imaging (MRI) on a 3T scanner. Freesurfer (version 5.1) was used to obtain regional measures of cortical GM, WM, hippocampal, and insula volume. Multiple linear regression was used to determine the association between the self-reported frequencies of hearing chemical alarms during the Gulf War and regional brain volume. There was an inverse association between the self-reported frequency of hearing chemical alarms sound and total cortical GM (adjusted p=0.007), even after accounting for potentially confounding demographic and clinical variables, the veterans' current health status, and other concurrent deployment-related exposures that were correlated with hearing chemical alarms. Post-hoc analyses extended the inverse relationship between the frequency of hearing chemical alarms to GM volume in the frontal (adjusted p=0.02), parietal (adjusted p=0.01), and occipital (adjusted p=0.001) lobes. In contrast, regional brain volumes were not significantly associated with predicted exposure to the Khamisiyah plume or with Gulf War Illness status defined by the Kansas or Centers for Disease Control and Prevention criteria. Many veterans reported hearing chemical alarms sound during the Gulf War. 
The current findings suggest that exposure to substances that triggered those chemical alarms during the Gulf War likely had adverse neuroanatomical effects. Published by Elsevier B.V.
Enhancing the Induction Skill of Deaf and Hard-of-Hearing Children with Virtual Reality Technology.
Passig, D; Eden, S
2000-01-01
Many researchers have found that deaf and hard-of-hearing children have unusual difficulty with reasoning and reaching a reasoned conclusion, particularly when the process of induction is required. The purpose of this study was to investigate whether the practice of rotating virtual reality (VR) three-dimensional (3D) objects would have a positive effect on the ability of deaf and hard-of-hearing children to use inductive processes when dealing with shapes. Three groups were involved in the study: (1) an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game; (2) control group I of 23 deaf and hard-of-hearing children, who played a similar two-dimensional (2D, non-VR) game; and (3) control group II of 16 hearing children, for whom no intervention was introduced. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the inductive thinking applied to shapes by the experimental group, whereas the first control group did not significantly improve its performance. Also, prior to the VR 3D experience, the deaf and hard-of-hearing children attained lower scores in inductive abilities than the children with normal hearing (control group II). After the VR 3D experience, the experimental group improved to the extent that there was no noticeable difference between them and the children with normal hearing.
Aided Electrophysiology Using Direct Audio Input: Effects of Amplification and Absolute Signal Level
Billings, Curtis J.; Miller, Christi W.; Tremblay, Kelly L.
2016-01-01
Purpose This study investigated (a) the effect of amplification on cortical auditory evoked potentials (CAEPs) at different signal levels when signal-to-noise ratios (SNRs) were equated between unaided and aided conditions, and (b) the effect of absolute signal level on aided CAEPs when SNR was held constant. Method CAEPs were recorded from 13 young adults with normal hearing. A 1000-Hz pure tone was presented in unaided and aided conditions with a linear analog hearing aid. Direct audio input was used, allowing recorded hearing aid noise floor to be added to unaided conditions to equate SNRs between conditions. An additional stimulus was created through scaling the noise floor to study the effect of signal level. Results Amplification resulted in delayed N1 and P2 peak latencies relative to the unaided condition. An effect of absolute signal level (when SNR was constant) was present for aided CAEP area measures, such that larger area measures were found at higher levels. Conclusion Results of this study further demonstrate that factors in addition to SNR must also be considered before CAEPs can be used clinically to measure aided thresholds. PMID:26953543
Evaluation of Critical Bandwidth Using Digitally Processed Speech.
1982-05-12
observed after repeating the two tests on persons with confirmed cases of sensorineural hearing impairment. Again, the plotted speech discrimination... quantifying the critical bandwidth of persons on a clinical or pre-employment level. The complex portion of the test design (the computer generation of... "super" normal hearing individuals (i.e., those persons with narrower-than-normal critical bands). This ability of the test shows promise as a valuable
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
Acquired hearing loss and brain plasticity.
Eggermont, Jos J
2017-01-01
Acquired hearing loss results in an imbalance of the cochlear output across frequency. Central auditory system homeostatic processes responding to this result in frequency-specific gain changes consequent to the emerging imbalance between excitation and inhibition. Several consequences thereof are increased spontaneous firing rates, increased neural synchrony, and a reorganization of tonotopic areas that, in adults, is potentially restricted to the auditory thalamus and cortex. It does not seem to matter much whether the hearing loss is acquired neonatally or in adulthood. In humans, no clear evidence of tonotopic map changes with hearing loss has so far been provided, but frequency-specific gain changes are well documented. Unilateral hearing loss in addition makes brain activity across hemispheres more symmetrical and more synchronous. Molecular studies indicate that in the brainstem, at 2-5 days post trauma, glutamatergic activity is reduced, whereas glycinergic and GABAergic activity is largely unchanged. At 2 months post trauma, excitatory activity remains decreased but inhibitory activity is significantly increased. In contrast, protein assays related to inhibitory transmission are all decreased or unchanged in the brainstem, midbrain, and auditory cortex. Comparison of neurophysiological data with the molecular findings along a timeline of changes following noise trauma suggests that increases in spontaneous firing rates are related to decreases in inhibition, and not to increases in excitation. Because noise-induced hearing loss in cats resulted in a loss of cortical temporal processing capabilities, this may also underlie speech understanding difficulties in humans. Copyright © 2016 Elsevier B.V. All rights reserved.
Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H
2015-01-01
Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients who experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.
Ricketts, Todd A.; Dittberner, Andrew B.; Johnson, Earl E.
2008-01-01
Purpose: One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the…
Hutter, E; Grapp, M; Argstatter, H
2016-12-01
People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. The study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition, and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody, and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination in a high and a low pitch range, and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy in the rehabilitation process, improvements in this delicate area could be achieved.
2016-01-01
People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. PMID:27698260
Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits
LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.
2014-01-01
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
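The dynamic normalization model sketched above can be illustrated with a toy simulation: a slow recurrent gain signal enters the denominator of a divisive value code, producing an initial un-normalized transient followed by a normalized sustained level. A minimal Euler-integration sketch of this class of model; the parameter values and function name are illustrative, not those fitted in the paper:

```python
def simulate_normalization(values, tau_r=0.1, tau_g=1.0, sigma=0.1,
                           dt=0.001, t_end=10.0):
    """Toy dynamic divisive normalization:
        tau_r * dR_i/dt = -R_i + V_i / (sigma + sum_j G_j)
        tau_g * dG_i/dt = -G_i + R_i
    R is firing rate, G a slower gain signal tracking R.
    Returns the firing-rate trajectory (list of [R_1..R_n] per step)."""
    n = len(values)
    R, G = [0.0] * n, [0.0] * n
    history = []
    for _ in range(int(t_end / dt)):
        denom = sigma + sum(G)
        dR = [(-R[i] + values[i] / denom) / tau_r for i in range(n)]
        dG = [(-G[i] + R[i]) / tau_g for i in range(n)]
        R = [r + dt * d for r, d in zip(R, dR)]
        G = [g + dt * d for g, d in zip(G, dG)]
        history.append(list(R))
    return history
```

At steady state R_i = V_i / (sigma + sum_j R_j), so each unit's sustained rate encodes its value relative to the summed values in the pool, while the early transient reflects the un-normalized input, consistent with the phasic-sustained profile described above.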
Reduced volume of Heschl's gyrus in tinnitus.
Schneider, Peter; Andermann, Martin; Wengenroth, Martina; Goebel, Rainer; Flor, Herta; Rupp, André; Diesch, Eugen
2009-04-15
The neural basis of tinnitus is unknown. Recent neuroimaging studies point towards involvement of several cortical and subcortical regions. Here we demonstrate that tinnitus may be associated with structural changes in the auditory cortex. Using individual morphological segmentation, the medial partition of Heschl's gyrus (mHG) was studied in individuals with and without chronic tinnitus using magnetic resonance imaging. Both the tinnitus and the non-tinnitus group included musicians and non-musicians. Patients exhibited significantly smaller mHG gray matter volumes than controls. In unilateral tinnitus, this effect was almost exclusively seen in the hemisphere ipsilateral to the affected ear. In bilateral tinnitus, mHG volume was substantially reduced in both hemispheres. The tinnitus-related volume reduction was found across the full extent of mHG, not only in the high-frequency part usually most affected by hearing loss-induced deafferentation. However, there was also evidence for a relationship between volume reduction and hearing loss. Correlations between volume and hearing level depended on the subject group as well as the asymmetry of the hearing loss. The volume changes observed may represent antecedents or consequences of tinnitus and tinnitus-associated hearing loss and also raise the possibility that small cortical volume constitutes a vulnerability factor.
Evaluation of Extended-wear Hearing Aid Technology for Operational Military Use
2017-07-01
for a transparent hearing protection device that could protect the hearing of normal-hearing listeners without degrading auditory situational...method, suggest that continuous noise protection is also comparable to conventional earplug devices. Behavioral testing on listeners with normal...associated with the extended-wear hearing aid could be adapted to provide long-term hearing protection for listeners with normal hearing with minimal
Central auditory processing effects induced by solvent exposure.
Fuente, Adrian; McPherson, Bradley
2007-01-01
Various studies have demonstrated that organic solvent exposure may induce auditory damage. Studies conducted in workers occupationally exposed to solvents suggest, on the one hand, poorer hearing thresholds than in matched non-exposed workers, and on the other hand, central auditory damage due to solvent exposure. Taking into account the potential auditory damage induced by solvent exposure due to the neurotoxic properties of such substances, the present research aimed at studying the possible auditory processing disorder (APD), and possible hearing difficulties in daily life listening situations that solvent-exposed workers may acquire. Fifty workers exposed to a mixture of organic solvents (xylene, toluene, methyl ethyl ketone) and 50 non-exposed workers matched by age, gender and education were assessed. Only subjects with no history of ear infections, high blood pressure, kidney failure, metabolic and neurological diseases, or alcoholism were selected. The subjects had either normal hearing or sensorineural hearing loss, and normal tympanometric results. Hearing-in-noise (HINT), dichotic digit (DD), filtered speech (FS), pitch pattern sequence (PPS), and random gap detection (RGD) tests were carried out in the exposed and non-exposed groups. A self-report inventory of each subject's performance in daily life listening situations, the Amsterdam Inventory for Auditory Disability and Handicap, was also administered. Significant threshold differences between exposed and non-exposed workers were found at some of the hearing test frequencies, for both ears. However, exposed workers still presented normal hearing thresholds as a group (equal or better than 20 dB HL). Also, for the HINT, DD, PPS, FS and RGD tests, non-exposed workers obtained better results than exposed workers. Finally, solvent-exposed workers reported significantly more hearing complaints in daily life listening situations than non-exposed workers. 
It is concluded that subjects exposed to solvents may acquire an APD and thus the sole use of pure-tone audiometry is insufficient to assess hearing in solvent-exposed populations.
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
2015-07-01
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral modulation detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
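As an illustration of the broadband stimuli described above, a spectral-ripple noise can be sketched as a sum of random-phase tones whose amplitudes follow a sinusoid in log-frequency. This is only a minimal sketch; the function name and all parameter values (frequency range, tone count, sample rate) are assumptions, not those used in the study.

```python
import math
import random

def ripple_noise(ripples_per_octave, depth_db, f_lo=100.0, f_hi=5000.0,
                 n_tones=20, dur_s=0.05, fs=44100):
    """Spectral-ripple stimulus sketch: random-phase tones, log-spaced in
    frequency, whose amplitudes follow a sinusoid in log2(frequency).
    All parameter values are illustrative assumptions."""
    n = int(dur_s * fs)
    sig = [0.0] * n
    for k in range(n_tones):
        f = f_lo * (f_hi / f_lo) ** (k / (n_tones - 1))   # log spacing
        amp_db = (depth_db / 2.0) * math.sin(
            2 * math.pi * ripples_per_octave * math.log2(f))
        amp = 10 ** (amp_db / 20.0)
        phi = random.uniform(0.0, 2 * math.pi)
        for i in range(n):
            sig[i] += amp * math.sin(2 * math.pi * f * i / fs + phi)
    return sig
```

Increasing `ripples_per_octave` makes the spectral peaks denser, which is what makes discrimination and modulation-depth detection harder at higher ripple densities.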
Speech perception in noise in unilateral hearing loss.
Mondelli, Maria Fernanda Capoani Garcia; Dos Santos, Marina de Marchi; José, Maria Renata
2016-01-01
Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss face greater difficulty understanding speech than normal listeners. To evaluate the speech perception of individuals with unilateral hearing loss, with and without competing noise, before and after the hearing aid fitting process. The study included 30 adults of both genders diagnosed with moderate or severe sensorineural unilateral hearing loss, evaluated using the Hearing In Noise Test-Brazil in the following scenarios: silence, frontal noise, noise to the right, and noise to the left, before and after the hearing aid fitting process. The study participants had a mean age of 41.9 years, and most of them presented right unilateral hearing loss. In all scenarios evaluated with the Hearing In Noise Test, better performance in speech perception was observed with the use of hearing aids. In the Hearing In Noise Test-Brazil evaluation, individuals with unilateral hearing loss demonstrated better speech perception when using hearing aids, both in silence and in situations with competing noise. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akpinar, Berkcan; Mousavi, Seyed H., E-mail: mousavish@upmc.edu; McDowell, Michael M.
Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent early (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal-hearing patients resulted in improved retention of all hearing measures compared with later SRS.
2013-01-01
Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream.
Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings. PMID:23351174
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
Grossberg, Stephen
2017-03-01
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
Klinke, R; Kral, A; Heid, S; Tillein, J; Hartmann, R
1999-09-10
In congenitally deaf cats, the central auditory system is deprived of acoustic input because of degeneration of the organ of Corti before the onset of hearing. Primary auditory afferents survive and can be stimulated electrically. By means of an intracochlear implant and an accompanying sound processor, congenitally deaf kittens were exposed to sounds and conditioned to respond to tones. After months of exposure to meaningful stimuli, the cortical activity in chronically implanted cats produced field potentials of higher amplitudes, expanded in area, developed long latency responses indicative of intracortical information processing, and showed more synaptic efficacy than in naïve, unstimulated deaf cats. The activity established by auditory experience resembles activity in hearing animals.
How age affects memory task performance in clinically normal hearing persons.
Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
2017-05-01
The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants passed a cognitive screening task, the Montreal Cognitive Assessment (MoCA). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.
Füllgrabe, Christian; Rosen, Stuart
2016-01-01
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18-91 years), for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.
Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda
2017-04-01
Localization involves processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scanty in the literature. The present study is designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm, using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases: a pre-training, training, and post-training phase. At the pre- and post-training phases, both behavioral measures of spatial acuity and electrophysiological P300 were administered. The spatial acuity of the participants in the free field and closed field was measured, apart from quantifying their binaural processing abilities. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. Descriptive statistics indicated an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating the clinical relevance of these measures in documenting the impact of training. However, the same was not reflected in P300. The training protocol used in the present study proves, on a preliminary basis, to be effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.
Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss
Karawani, Hanin; Bitan, Tali; Attias, Joseph; Banai, Karen
2016-01-01
Introduction : Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods : Fifty-six listeners (60–72 y/o) participated in the study: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate training, delayed training, and no training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between normal and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results : Significant improvements on all trained conditions were observed in both ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training.
Conclusions : ARHL did not preclude auditory perceptual learning but there was little generalization to untrained conditions. We suggest that most training-related changes occurred at higher level task-specific cognitive processes in both groups. However, these were enhanced by high quality perceptual representations in the normal-hearing group. In contrast, some training-related changes have also occurred at the level of phonemic representations in the ARHL group, consistent with an interaction between bottom-up and top-down processes. PMID:26869944
Formby, Craig; Korczak, Peggy; Sherlock, LaGuinn P; Hawley, Monica L; Gold, Susan
2017-02-01
In this report of three cases, we consider electrophysiologic measures from three hyperacusic hearing-impaired individuals who, prior to treatment to expand their dynamic ranges for loudness, were problematic hearing aid candidates because of their diminished sound tolerance and reduced dynamic ranges. Two of these individuals were treated with structured counseling combined with low-level broadband sound therapy from bilateral sound generators and the third case received structured counseling in combination with a short-acting placebo sound therapy. Each individual was highly responsive to his or her assigned treatment as revealed by expansion of the dynamic range by at least 20 dB at one or more frequencies posttreatment. Of specific interest in this report are their latency and amplitude measures taken from tone burst-evoked auditory brainstem response (ABR) and cortically derived middle latency response (MLR) recordings, measured as a function of increasing loudness at 500 and 2,000 Hz pre- and posttreatment. The resulting ABR and MLR latency and amplitude measures for each case are considered here in terms of pre- and posttreatment predictions. The respective pre- and posttreatment predictions anticipated larger pretreatment response amplitudes and shorter pretreatment response latencies relative to typical normal control values and smaller normative-like posttreatment response amplitudes and longer posttreatment response latencies relative to the corresponding pretreatment values for each individual. From these results and predictions, we conjecture about the neural origins of the hyperacusis conditions (i.e., brainstem versus cortical) and the neuronal sites responsive to treatment. The only consistent finding in support of the pre- and posttreatment predictions and, thus, the strongest index of hyperacusis and positive treatment-related effects was measured for MLR latency responses for wave Pa at 2,000 Hz. 
Other response indices, including ABR wave V latency and wave V-V' amplitude and MLR wave Na-Pa amplitude for 500 and 2,000 Hz, appear ambiguous across and/or within these individuals. Notwithstanding significant challenges for interpreting these findings, including associated confounding effects of their sensorineural hearing losses and differences in the presentation levels of the tone burst stimuli used to collect these measures for each individual, our limited analyses of three cases suggest measures of MLR wave Pa latency at 2,000 Hz (reflecting cortical contributions) may be a promising objective indicator of hyperacusis and dynamic range expansion treatment effects.
Integrating cognitive and peripheral factors in predicting hearing-aid processing effectiveness
Kates, James M.; Arehart, Kathryn H.; Souza, Pamela E.
2013-01-01
Individual factors beyond the audiogram, such as age and cognitive abilities, can influence speech intelligibility and speech quality judgments. This paper develops a neural network framework for combining multiple subject factors into a single model that predicts speech intelligibility and quality for a nonlinear hearing-aid processing strategy. The nonlinear processing approach used in the paper is frequency compression, which is intended to improve the audibility of high-frequency speech sounds by shifting them to lower frequency regions where listeners with high-frequency loss have better hearing thresholds. An ensemble averaging approach is used for the neural network to avoid the problems associated with overfitting. Models are developed for two subject groups, one having nearly normal hearing and the other mild-to-moderate sloping losses. PMID:25669257
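The ensemble-averaging idea described in this abstract (training several models and averaging their outputs to curb overfitting) can be sketched in miniature. In this hedged sketch, simple least-squares lines stand in for the paper's neural networks, and all data, names, and parameter values are fabricated for illustration only.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def ensemble_predict(models, x):
    """Ensemble averaging: each member's prediction is computed, then
    averaged, which smooths out member-to-member overfitting."""
    return sum(a * x + b for a, b in models) / len(models)

# Bootstrap-style ensemble: each member is fit on a resampled copy of the data.
random.seed(0)
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0.0, 0.5) for x in xs]  # noisy y = 2x + 1
models = []
for _ in range(10):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))
```

The averaged prediction is less sensitive to any single member's fit to noise, which is the same rationale the paper gives for averaging its neural networks.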
Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg
2014-01-01
Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments with 21 normal-hearing and hearing-impaired test subjects were carried out. The results show that after direct comparison of the sound presentation concepts, the presentation by the single TV speaker was most preferred, whereas the phantom source concept received the highest acceptance ratings as far as the general concept is concerned. The localization accuracy of the phantom source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamic-compressed signals, although processed speech was often described as being clearer.
Spectral context affects temporal processing in awake auditory cortex
Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.
2013-01-01
Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16-channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM were set to the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
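The tonal SAM stimulus described above has a simple closed form, (1 + m·sin(2πfm·t))·sin(2πfc·t), which can be sketched directly. This is only an illustrative sketch; the function name, sample rate, and default values are assumptions, not the study's stimulus parameters.

```python
import math

def sam_tone(fc_hz, fm_hz, depth, dur_s, fs=44100):
    """Sinusoidally amplitude-modulated (SAM) tone:
    (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t).
    For a noise-carrier SAM, the inner sine would be replaced by a
    bandlimited noise sample. Parameter values are illustrative."""
    n = int(dur_s * fs)
    return [(1.0 + depth * math.sin(2 * math.pi * fm_hz * i / fs))
            * math.sin(2 * math.pi * fc_hz * i / fs) for i in range(n)]
```

With `depth = 1.0` the envelope swings between 0 and 2, i.e. 100% modulation; sweeping `fm_hz` from 4 to 512 Hz traces out the stimulus axis of an MTF.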
A speech processing study using an acoustic model of a multiple-channel cochlear implant
NASA Astrophysics Data System (ADS)
Xu, Ying
1998-10-01
A cochlear implant is an electronic device designed to provide sound information for adults and children who have bilateral profound hearing loss. The task of representing speech signals as electrical stimuli is central to the design and performance of cochlear implants. Studies have shown that the current speech-processing strategies provide significant benefits to cochlear implant users. However, the evaluation and development of speech-processing strategies have been complicated by hardware limitations and large variability in user performance. To alleviate these problems, an acoustic model of a cochlear implant with the SPEAK strategy is implemented in this study, in which a set of acoustic stimuli whose psychophysical characteristics are as close as possible to those produced by a cochlear implant is presented to normal-hearing subjects. To test the effectiveness and feasibility of this acoustic model, a psychophysical experiment was conducted to match the performance of a normal-hearing listener using model-processed signals to that of a cochlear implant user. Good agreement was found between an implanted patient and an age-matched normal-hearing subject in a dynamic signal discrimination experiment, indicating that this acoustic model is a reasonably good approximation of a cochlear implant with the SPEAK strategy. The acoustic model was then used to examine the potential of the SPEAK strategy in terms of its temporal and frequency encoding of speech. It was hypothesized that better temporal and frequency encoding of speech can be accomplished by higher stimulation rates and a larger number of activated channels. Vowel and consonant recognition tests were conducted on normal-hearing subjects using speech tokens processed by the acoustic model, with different combinations of stimulation rate and number of activated channels.
The results showed that vowel recognition was best at 600 pps and 8 activated channels, but further increases in stimulation rate and channel numbers were not beneficial. Manipulations of stimulation rate and number of activated channels did not appreciably affect consonant recognition. These results suggest that overall speech performance may improve by appropriately increasing stimulation rate and number of activated channels. Future revision of this acoustic model is necessary to provide more accurate amplitude representation of speech.
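The channel-selection step that characterizes SPEAK-like strategies (transmitting only the largest spectral maxima in each analysis frame) can be sketched as follows. This is a hedged illustration: the function name is invented here, and the default channel count and number of maxima are assumptions rather than the device's exact configuration.

```python
def speak_maxima(channel_levels, n_maxima=6):
    """SPEAK-style maxima selection sketch: per analysis frame, keep only
    the n_maxima largest channel envelope levels and zero the rest.
    Defaults are illustrative, not the implant's exact values."""
    keep = set(sorted(range(len(channel_levels)),
                      key=lambda i: channel_levels[i], reverse=True)[:n_maxima])
    return [lvl if i in keep else 0.0 for i, lvl in enumerate(channel_levels)]
```

Raising `n_maxima` corresponds to the abstract's "larger number of activated channels" manipulation: more channels survive selection in each frame, at the cost of a higher overall stimulation load.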
Bidelman, Gavin M.; Heinz, Michael G.
2011-01-01
Human listeners prefer consonant over dissonant musical intervals and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089
Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako
2014-10-15
When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.
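The binaural-beat stimulus described above is simply a different pure tone in each ear, with the frequency difference setting the beat rate. A minimal sketch, with an invented function name and assumed sample rate and frequencies (the study swept the rate continuously from 3 to 60 Hz rather than using fixed tones):

```python
import math

def binaural_beat(f_left_hz, f_right_hz, dur_s, fs=44100):
    """Binaural-beat stimulus sketch: one pure tone per ear; the frequency
    difference sets the beat rate (e.g. 500 Hz left, 503 Hz right -> 3 Hz).
    Returns (left, right) sample lists; parameter values are assumptions."""
    n = int(dur_s * fs)
    left = [math.sin(2 * math.pi * f_left_hz * i / fs) for i in range(n)]
    right = [math.sin(2 * math.pi * f_right_hz * i / fs) for i in range(n)]
    return left, right

# A 3-Hz beat, as at the low end of the rate range used in the study:
left, right = binaural_beat(500.0, 503.0, dur_s=1.0)
```

Because each ear receives only an unmodulated tone, any beat percept must arise from binaural interaction in the central auditory system, which is what makes the stimulus useful for probing cortical binaural processing.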
Hopkins, Kathryn; King, Andrew; Moore, Brian C J
2012-09-01
Hearing aids use amplitude compression to compensate for the effects of loudness recruitment. The compression speed that gives the best speech intelligibility varies among individuals. Moore [(2008). Trends Amplif. 12, 300-315] suggested that an individual's sensitivity to temporal fine structure (TFS) information may affect which compression speed gives most benefit. This hypothesis was tested using normal-hearing listeners with a simulated hearing loss. Sentences in a competing talker background were processed using multi-channel fast or slow compression followed by a simulation of threshold elevation and loudness recruitment. Signals were either tone vocoded with 1-ERB(N)-wide channels (where ERB(N) is the bandwidth of normal auditory filters) to remove the original TFS information, or not processed further. In a second experiment, signals were vocoded with either 1- or 2-ERB(N)-wide channels, to test whether the available spectral detail affects the optimal compression speed. Intelligibility was significantly better for fast than slow compression regardless of vocoder channel bandwidth. The results suggest that the availability of original TFS or detailed spectral information does not affect the optimal compression speed. This conclusion is tentative, since while the vocoder processing removed the original TFS information, listeners may have used the altered TFS in the vocoded signals.
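The fast-versus-slow compression contrast in this abstract comes down to the attack and release time constants of a dynamic range compressor. A single-channel sketch follows; it is illustrative only, with assumed parameter values, not the multi-channel processing used in the study.

```python
import math

def compress(signal, fs, ratio=3.0, threshold_db=-30.0,
             attack_ms=5.0, release_ms=50.0):
    """Single-channel dynamic range compressor sketch. A level estimate
    tracks the signal magnitude with separate attack/release time
    constants; level above threshold is reduced according to the
    compression ratio. "Fast" vs "slow" compression corresponds to short
    vs long time constants. Parameter values are illustrative."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for x in signal:
        mag = abs(x)
        a = a_att if mag > env else a_rel
        env = a * env + (1.0 - a) * mag          # smoothed level estimate
        level_db = 20.0 * math.log10(max(env, 1e-9))
        gain_db = 0.0
        if level_db > threshold_db:              # compress only above threshold
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out
```

Shortening `attack_ms` and `release_ms` makes the gain track the speech envelope syllable by syllable (fast compression), while lengthening them makes the gain behave more like slowly varying volume control (slow compression).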
Fitting and verification of frequency modulation systems on children with normal hearing.
Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa
2014-06-01
Several recent investigations support the use of frequency modulation (FM) systems in children who have normal hearing together with auditory processing or listening disorders, such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. 
After completion of the fitting, speech recognition in noise at a -5 signal-to-noise ratio and loudness ratings at a +5 signal-to-noise ratio were measured in four conditions: (1) no FM system, (2) FM receiver on the right ear, (3) FM receiver on the left ear, and (4) bilateral FM system. The results of this study suggested that the slightly modified AAA real-ear measurement procedures resulted in a valid fitting of one FM system on children with normal hearing. On average, prescriptive targets were met for 1000, 2000, 3000, and 4000 Hz within 3 dB, and maximum output of the FM system never exceeded and was significantly lower than predicted uncomfortable loudness levels for the children. There was a minimal change in the real-ear unaided response when the open-ear FM receiver was placed into the ear. Use of the FM system on one or both ears resulted in significantly better speech recognition in noise relative to a no-FM condition, and the unilateral and bilateral FM receivers resulted in a comfortably loud signal when listening in background noise. Real-ear measures are critical for obtaining an appropriate fit of an FM system on children with normal hearing. American Academy of Audiology.
Voice gender identification by cochlear implant users: The role of spectral and temporal resolution
NASA Astrophysics Data System (ADS)
Fu, Qian-Jie; Chinchilla, Sherol; Nogaki, Geraldine; Galvin, John J.
2005-09-01
The present study explored the relative contributions of spectral and temporal information to voice gender identification by cochlear implant users and normal-hearing subjects. Cochlear implant listeners were tested using their everyday speech processors, while normal-hearing subjects were tested under speech processing conditions that simulated various degrees of spectral resolution, temporal resolution, and spectral mismatch. Voice gender identification was tested for two talker sets. In Talker Set 1, the mean fundamental frequency values of the male and female talkers differed by 100 Hz while in Talker Set 2, the mean values differed by 10 Hz. Cochlear implant listeners achieved higher levels of performance with Talker Set 1, while performance was significantly reduced for Talker Set 2. For normal-hearing listeners, performance was significantly affected by the spectral resolution, for both Talker Sets. With matched speech, temporal cues contributed to voice gender identification only for Talker Set 1 while spectral mismatch significantly reduced performance for both Talker Sets. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to 4-8 spectral channels. The results suggest that, because of the reduced spectral resolution, cochlear implant patients may attend strongly to periodicity cues to distinguish voice gender.
Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.
Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W
2014-11-26
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors.
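The static normalization equation R_i = V_i / (σ + Σ_j V_j) becomes dynamic by letting a recurrent gain signal lag the unit responses; when the gain pool is slower than the units it divides, the model produces the phasic-sustained pattern described above. A generic Euler-integration sketch (the time constants and the exact circuit wiring are illustrative assumptions, not the paper's published model):

```python
import numpy as np

def dynamic_normalization(values, sigma=1.0, tau_r=0.02, tau_g=0.10,
                          dt=0.001, steps=1000):
    """Units relax toward value/(sigma + gain); gain tracks summed activity."""
    v = np.asarray(values, dtype=float)
    r = np.zeros_like(v)   # unit responses
    g = 0.0                # slow recurrent gain signal
    trace = []
    for _ in range(steps):
        r = r + (dt / tau_r) * (-r + v / (sigma + g))
        g = g + (dt / tau_g) * (-g + r.sum())
        trace.append(r.copy())
    return np.array(trace)  # shape (steps, n_units)

trace = dynamic_normalization([10.0, 5.0])
# Early on g is still small, so responses overshoot (phasic burst), then
# settle to the divisively normalized steady state (sustained activity).
```

At steady state the responses satisfy the static normalization equation, while the initial transient carries un-normalized value information, matching the "value coding during initial transients" the abstract reports.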
Tolerable hearing aid delays. V. Estimation of limits for open canal fittings.
Stone, Michael A; Moore, Brian C J; Meisenbacher, Katrin; Derleth, Ralph P
2008-08-01
Open canal fittings are a popular alternative to close-fitting earmolds for use with patients whose low-frequency hearing is near normal. Open canal fittings reduce the occlusion effect but also provide little attenuation of external air-borne sounds. The wearer therefore receives a mixture of air-borne sound and amplified but delayed sound through the hearing aid. To explore the effects of this mixing systematically, we simulated, with varying degrees of complexity, a hearing loss together with a high-quality hearing aid programmed to compensate for that loss, and had normal-hearing participants assess the processed signals. The off-line processing was intended to simulate the percept of listening to the speech of a single (external) talker. The effect of introducing a delay on a subjective measure of speech quality (disturbance rating on a scale from 1 to 7, 7 being maximal disturbance) was assessed using both a constant gain and a gain that varied across frequency. In three experiments we assessed the effects of different amounts of delay, maximum aid gain and rate of change of gain with frequency. The simulated hearing aids were chosen to be appropriate for typical mild to moderate high-frequency losses starting at 1 or 2 kHz. Two of the experiments used simulations of linear hearing aids, whereas the third used fast-acting multichannel wide-dynamic-range compression and a simulation of loudness recruitment. In one experiment, a condition was included in which spectral ripples produced by comb-filtering were partially removed using a digital filter. For linear hearing aids, disturbance increased progressively with increasing delay and with decreasing rate of change of gain; the effect of amount of gain was small when the gain varied across frequency. The effect of reducing spectral ripples was also small. 
When the simulation of dynamic processes was included (experiment 3), the pattern with delay remained similar, but disturbance increased with increasing gain. It is argued that this is mainly due to disturbance increasing with increasing simulated hearing loss, probably because of the dynamic processing involved in the hearing aid and recruitment simulation. A disturbance rating of 3 may be considered as just acceptable. This rating was reached for delays of about 5 and 6 msec, for simulated hearing losses starting at 2 and 1 kHz, respectively. The perceptual effect of reducing the spectral ripples produced by comb-filtering was small; the effect was greatest when the hearing aid gain was small and when the hearing loss started at a low frequency.
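The comb-filter ripples discussed above arise because the listener hears the direct air-borne path summed with the delayed, amplified aid path: the combined magnitude response is |1 + g·e^(−j2πfτ)|, so ripple peaks repeat every 1/τ Hz. A short sketch, with the gain and delay values chosen only for illustration:

```python
import numpy as np

delay_ms = 5.0   # hearing-aid processing delay (illustrative)
g = 1.0          # aid gain relative to the direct path (linear, illustrative)

tau = delay_ms / 1000.0
f = np.linspace(0, 4000, 8001)                     # frequency axis in Hz
H = np.abs(1 + g * np.exp(-2j * np.pi * f * tau))  # combined magnitude response

# Peaks occur where f * tau is an integer, i.e. every 1/tau Hz;
# for a 5-ms delay the ripples are spaced 200 Hz apart.
peak_spacing_hz = 1.0 / tau
```

With equal path levels (g = 1) the response swings between 2 (in-phase peaks) and 0 (cancellation notches), which is why these ripples are audible as coloration and why a longer delay packs them more densely across frequency.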
Rosemann, Stephanie; Thiel, Christiane M
2018-07-15
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. 
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.
McCreery, Ryan W; Stelmachowicz, Patricia G
2013-09-01
Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children who are developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal to noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter condition, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected. 
The negative effects of reduced audibility may occur even under conditions where speech repetition is not impacted. Limited stimulus audibility may result in greater cognitive effort for verbal rehearsal in working memory and may limit the availability of cognitive resources to allocate to working memory and other processes.
Assessment of cortical auditory evoked potentials in children with specific language impairment.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Pilka, Adam; Skarżyński, Henryk
2018-02-28
The proper course of speech development heavily influences the cognitive and personal development of children. It is a condition for preschool and school success - it facilitates socializing and expressing feelings and needs. Impairment of language and its development in children represents a major diagnostic and therapeutic challenge for physicians and therapists. Early diagnosis of coexisting deficits and early initiation of therapy influence therapeutic success. One of the basic diagnostic tests for children with specific language impairment (SLI) is audiometry, commonly referred to as a hearing test. However, auditory processing is just as important as a proper hearing threshold. Therefore, diagnosis of central auditory processing disorder may be a valuable supplement to the diagnosis of language impairment. Early diagnosis and implementation of appropriate treatment may contribute to an effective language therapy.
ERIC Educational Resources Information Center
Wilson, Richard H.; McArdle, Rachel A.; Smith, Sherri L.
2007-01-01
Purpose: The purpose of this study was to examine in listeners with normal hearing and listeners with sensorineural hearing loss the within- and between-group differences obtained with 4 commonly available speech-in-noise protocols. Method: Recognition performances by 24 listeners with normal hearing and 72 listeners with sensorineural hearing…
Danermark, B; Antonson, S; Lundström, I
2001-01-01
The aim of this study was to investigate the decision process and to analyse the mechanisms involved in the transition from upper secondary education to post-secondary education or the labour market. Sixteen students with sensorineural hearing loss were selected. Among these, eight of the students continued to university and eight did not. Twenty-five per cent of the students were women and the average age was 28 years. The investigation was conducted about 5 years after graduation from the upper secondary school. Both quantitative and qualitative methods were used. The results showed that none of the students came from a family where one or both of the parents had a university or comparable education. The differences in choice between the two groups cannot be explained in terms of social inheritance. Our study indicates that given normal intellectual capacity the level of the hearing loss seems to have no predictive value regarding future educational performance and academic career. The conclusion is that it is of great importance that a hearing impaired pupil with normal intellectual capacity is encouraged and guided to choose an upper secondary educational programme which is orientated towards post-secondary education (instead of a narrow vocational programme). In addition to their hearing impairment and related educational problems, hard of hearing students have much more difficulty than normal hearing peers in coping with changes in intentions and goals regarding their educational career during their upper secondary education.
Rodríguez-Santos, José Miguel; Calleja, Marina; García-Orza, Javier; Iza, Mauricio; Damas, Jesús
2014-01-01
Deaf children usually achieve lower scores on numerical tasks than normally hearing peers. Explanations for mathematical disabilities in hearing children are based on quantity representation deficits (Geary, 1994) or on deficits in accessing these representations (Rousselle & Noël, 2008). The present study aimed to verify, by means of symbolic (Arabic digits) and nonsymbolic (dot constellations and hands) magnitude comparison tasks, whether deaf children show deficits in representations or in accessing numerical representations. The study participants were 10 prelocutive deaf children and 10 normally hearing children. Numerical distance and magnitude were manipulated. Response time (RT) analysis showed similar magnitude and distance effects in both groups on the 3 tasks. However, slower RTs were observed among the deaf participants on the symbolic task alone. These results suggest that although both groups' quantity representations were similar, the deaf group experienced a delay in accessing representations from symbolic codes.
Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.
Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M
1991-06-01
An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionately severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.
Choi, Inyong; Wang, Le; Bharadwaj, Hari; Shinn-Cunningham, Barbara
2014-01-01
Many studies have shown that attention modulates the cortical representation of an auditory scene, emphasizing an attended source while suppressing competing sources. Yet, individual differences in the strength of this attentional modulation and their relationship with selective attention ability are poorly understood. Here, we ask whether differences in how strongly attention modulates cortical responses reflect differences in normal-hearing listeners’ selective auditory attention ability. We asked listeners to attend to one of three competing melodies and identify its pitch contour while we measured cortical electroencephalographic responses. The three melodies were either from widely separated pitch ranges (“easy trials”), or from a narrow, overlapping pitch range (“hard trials”). The melodies started at slightly different times; listeners attended either the leading or lagging melody. Because of the timing of the onsets, the leading melody drew attention exogenously. In contrast, attending the lagging melody required listeners to direct top-down attention volitionally. We quantified how attention amplified auditory N1 response to the attended melody and found large individual differences in the N1 amplification, even though only correctly answered trials were used to quantify the ERP gain. Importantly, listeners with the strongest amplification of N1 response to the lagging melody in the easy trials were the best performers across other types of trials. Our results raise the possibility that individual differences in the strength of top-down gain control reflect inherent differences in the ability to control top-down attention. PMID:24821552
Neural responses to sounds presented on and off the beat of ecologically valid music
Tierney, Adam; Kraus, Nina
2013-01-01
The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system. PMID:23717268
Application of Cortical Processing Theory to Acoustical Analysis
2007-07-27
…attenuations of 80 dB. All stimuli were presented binaurally at 60 dB SPL. [Garbled excerpt from Section 4.2, "Human performance" (see also Appendix B), including Table A.2, a sample of the outcome of one DRT session for one stimulus condition and one subject, with rhyme pairs such as vee - bee and got - dot.] Listeners listened to the speech materials binaurally under Sennheiser HD580 headphones. On the first visit, each subject was given a pure tone hearing test to document…
Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E
2014-03-01
A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they were listening to sentences masked by fluctuating noise or a single talker. Efforts were made to improve audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech compared to fluctuating noise.
Zenker Castro, Franz; Fernández Belda, Rafael; Barajas de Prat, José Juan
2008-12-01
In this study we present a case of a 71-year-old female patient with sensorineural hearing loss and fitted with bilateral hearing aids. The patient complained of scant benefit from the hearing aid fitting, with difficulties in understanding speech with background noise. The otolaryngology examination was normal. Audiological tests revealed bilateral sensorineural hearing loss with threshold values of 51 and 50 dB HL in the right and left ear. The Dichotic Digit Test was administered in a divided-attention mode and with attention focused on each ear in turn. Results in this test are consistent with a Central Auditory Processing Disorder.
Linguistic Deterioration in Alzheimer's Senile Dementia and in Normal Aging.
ERIC Educational Resources Information Center
Emery, Olga Beattie
A study of language patterning as an indicator of higher cortical process focused on three matched comparison groups: normal pre-middle-aged, normal elderly, and elderly adults with senile dementia Alzheimer's type. In addition to tests of memory, level of cognitive function, and organic deficit, the formal aspects of language were analyzed in…
Intelligibility of Digital Speech Masked by Noise: Normal Hearing and Hearing Impaired Listeners
1990-06-01
…spectrograms of these phrases were generated by a LISt Processing (LISP) language program on a Symbolics 3670 artificial intelligence computer (see Figure 10). The…speech and the amount of difference varies with the type of vocoder. [Figure residue removed: a plot of ADPCM intelligibility by type of masking.]
Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica
2013-10-01
Introduction Head position sense is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim Analyze the head alignment in students with normal hearing and students with sensorineural hearing loss and compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. The analysis of head alignment occurred through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normally hearing students (p < 0.001). Forward head posture was the most frequently observed type of postural change, occurring in a greater proportion of children with hearing loss (p < 0.001), followed by the side-slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in head posture compared with children with normal hearing.
ERIC Educational Resources Information Center
Nelson, David A.
1991-01-01
Forward-masked psychophysical tuning curves were obtained at multiple probe levels from 26 normal-hearing listeners and 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Results indicated that some cochlear hearing losses influence the sharp tuning capabilities usually associated with outer hair cell function. (Author/JDD)
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
2011-01-01
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
Enhanced Somatosensory Feedback Reduces Prefrontal Cortical Activity During Walking in Older Adults
Christou, Evangelos A.; Ring, Sarah A.; Williamson, John B.; Doty, Leilani
2014-01-01
Background. The coordination of steady state walking is relatively automatic in healthy humans, such that active attention to the details of task execution and performance (controlled processing) is low. Somatosensation is a crucial input to the spinal and brainstem circuits that facilitate this automaticity. Impaired somatosensation in older adults may reduce automaticity and increase controlled processing, thereby contributing to deficits in walking function. The primary objective of this study was to determine if enhancing somatosensory feedback can reduce controlled processing during walking, as assessed by prefrontal cortical activation. Methods. Fourteen older adults (age 77.1±5.56 years) with mild mobility deficits and mild somatosensory deficits participated in this study. Functional near-infrared spectroscopy was used to quantify metabolic activity (tissue oxygenation index, TOI) in the prefrontal cortex. Prefrontal activity and gait spatiotemporal data were measured during treadmill walking and overground walking while participants wore normal shoes and under two conditions of enhanced somatosensation: wearing textured insoles and no shoes. Results. Relative to walking with normal shoes, textured insoles yielded a bilateral reduction of prefrontal cortical activity for treadmill walking (ΔTOI = −0.85 and −1.19 for left and right hemispheres, respectively) and for overground walking (ΔTOI = −0.51 and −0.66 for left and right hemispheres, respectively). Relative to walking with normal shoes, no shoes yielded lower prefrontal cortical activity for treadmill walking (ΔTOI = −0.69 and −1.13 for left and right hemispheres, respectively), but not overground walking. Conclusions. Enhanced somatosensation reduces prefrontal activity during walking in older adults. This suggests a less intensive utilization of controlled processing during walking. PMID:25112494
Wiggins, Ian M; Anderson, Carly A; Kitterick, Pádraig T; Hartley, Douglas E H
2016-09-01
Functional near-infrared spectroscopy (fNIRS) is a silent, non-invasive neuroimaging technique that is potentially well suited to auditory research. However, the reliability of auditory-evoked activation measured using fNIRS is largely unknown. The present study investigated the test-retest reliability of speech-evoked fNIRS responses in normally-hearing adults. Seventeen participants underwent fNIRS imaging in two sessions separated by three months. In a block design, participants were presented with auditory speech, visual speech (silent speechreading), and audiovisual speech conditions. Optode arrays were placed bilaterally over the temporal lobes, targeting auditory brain regions. A range of established metrics was used to quantify the reproducibility of cortical activation patterns, as well as the amplitude and time course of the haemodynamic response within predefined regions of interest. The use of a signal processing algorithm designed to reduce the influence of systemic physiological signals was found to be crucial to achieving reliable detection of significant activation at the group level. For auditory speech (with or without visual cues), reliability was good to excellent at the group level, but highly variable among individuals. Temporal-lobe activation in response to visual speech was less reliable, especially in the right hemisphere. Consistent with previous reports, fNIRS reliability was improved by averaging across a small number of channels overlying a cortical region of interest. Overall, the present results confirm that fNIRS can measure speech-evoked auditory responses in adults that are highly reliable at the group level, and indicate that signal processing to reduce physiological noise may substantially improve the reliability of fNIRS measurements.
Delays in auditory processing identified in preschool children with FASD
Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.
2012-01-01
Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia.
Delays in auditory processing identified in preschool children with FASD.
Stephen, Julia M; Kodituwakku, Piyadasa W; Kodituwakku, Elizabeth L; Romero, Lucinda; Peters, Amanda M; Sharadamma, Nirupama M; Caprihan, Arvind; Coffman, Brian A
2012-10-01
Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool-aged children. As sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control (HC) children aged 3 to 6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1,000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multidipole spatio-temporal modeling technique to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Auditory delay revealed by MEG in children with FASDs may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia.
Radwan, Heba Mohammed; El-Gharib, Amani Mohamed; Erfan, Adel Ali; Emara, Afaf Ahmad
2017-05-01
Delay in ABR and CAEP wave latencies in children with type 1 DM indicates that there is an abnormality in neural conduction in DM patients. The duration of DM has a greater effect on auditory function than the control of DM. Diabetes mellitus (DM) is a common endocrine and metabolic disorder. Evoked potentials offer the possibility to perform a functional evaluation of neural pathways in the central nervous system. To investigate the effect of type 1 diabetes mellitus (T1DM) on the auditory brain stem response (ABR) and cortical auditory evoked potentials (CAEPs). This study included two groups: a control group (GI), which consisted of 20 healthy children with normal peripheral hearing, and a study group (GII), which consisted of 30 children with T1DM. Basic audiological evaluation, ABR, and CAEPs were done in both groups. Delayed absolute latencies of ABR and CAEP waves were found. Amplitudes showed no significant difference between the two groups. A positive correlation was found between ABR wave latencies and the duration of DM. No correlation was found between ABR, CAEPs, and glycated hemoglobin.
Park, Esther; Tjia, Michelle; Zuo, Yi; Chen, Lu
2018-06-06
Retinoic acid (RA) and its receptors (RARs) are well established essential transcriptional regulators during embryonic development. Recent findings in cultured neurons identified an independent and critical post-transcriptional role of RA and RARα in the homeostatic regulation of excitatory and inhibitory synaptic transmission in mature neurons. However, the functional relevance of synaptic RA signaling in vivo has not been established. Here, using somatosensory cortex as a model system and the RARα conditional knock-out mouse as a tool, we applied multiple genetic manipulations to delete RARα postnatally in specific populations of cortical neurons, and asked whether synaptic RA signaling observed in cultured neurons is involved in cortical information processing in vivo. Indeed, conditional ablation of RARα in mice via a CaMKIIα-Cre or a layer 5-Cre driver line or via somatosensory cortex-specific viral expression of Cre-recombinase impaired whisker-dependent texture discrimination, suggesting a critical requirement of RARα expression in L5 pyramidal neurons of somatosensory cortex for normal tactile sensory processing. Transcranial two-photon imaging revealed a significant increase in dendritic spine elimination on apical dendrites of somatosensory cortical layer 5 pyramidal neurons in these mice. Interestingly, the enhancement of spine elimination is whisker experience-dependent, as whisker trimming rescued the spine elimination phenotype. Additionally, experiencing an enriched environment improved texture discrimination in RARα-deficient mice and reduced excessive spine pruning. Thus, RA signaling is essential for normal experience-dependent cortical circuit remodeling and sensory processing. SIGNIFICANCE STATEMENT The importance of synaptic RA signaling has been demonstrated in in vitro studies. However, whether RA signaling mediated by RARα contributes to neural circuit functions in vivo remains largely unknown.
In this study, using an RARα conditional knock-out mouse, we performed multiple regional/cell-type-specific manipulations of RARα expression in the postnatal brain, and show that RARα signaling contributes to normal whisker-dependent texture discrimination as well as regulating spine dynamics of apical dendrites from layer 5 (L5) pyramidal neurons in S1. Deletion of RARα in excitatory neurons in the forebrain induces elevated spine elimination and impaired sensory discrimination. Our study provides novel insights into the role of RARα signaling in cortical processing and experience-dependent spine maturation.
Print Knowledge of Preschool Children with Hearing Loss
ERIC Educational Resources Information Center
Werfel, Krystal L.; Lund, Emily; Schuele, C. Melanie
2015-01-01
Measures of print knowledge were compared across preschoolers with hearing loss and normal hearing. Alphabet knowledge did not differ between groups, but preschoolers with hearing loss performed lower on measures of print concepts and concepts of written words than preschoolers with normal hearing. Further study is needed in this area.
Martinich, S; Rosa, M G; Rocha-Miranda, C E
1990-01-01
The normal pattern of cytochrome oxidase (CO) activity in the posterior cortical areas of the South American opossum (Didelphis marsupialis aurita) was assessed both in horizontal sections of flattened cortices and in transverse cortical sections. The tangential distribution of CO activity was uniformly high in the striate cortex. In the peristriate region, alternating bands of dense and weak staining occupied all the cortical layers with the exception of layer I. This observation suggests the existence of a functional segregation of visual processing in the peristriate cortex of the opossum similar to that present in phylogenetically more recent groups.
Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald
2017-01-01
In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible.
Makropoulos, Antonios; Robinson, Emma C; Schuh, Andreas; Wright, Robert; Fitzgibbon, Sean; Bozek, Jelena; Counsell, Serena J; Steinweg, Johannes; Vecchiato, Katy; Passerat-Palmbach, Jonathan; Lenz, Gregor; Mortari, Filippo; Tenev, Tencho; Duff, Eugene P; Bastiani, Matteo; Cordero-Grande, Lucilio; Hughes, Emer; Tusor, Nora; Tournier, Jacques-Donald; Hutter, Jana; Price, Anthony N; Teixeira, Rui Pedro A G; Murgasova, Maria; Victor, Suresh; Kelly, Christopher; Rutherford, Mary A; Smith, Stephen M; Edwards, A David; Hajnal, Joseph V; Jenkinson, Mark; Rueckert, Daniel
2018-06-01
The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP) and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for the structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. This proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, which has been specifically designed to address considerable differences between adult and neonatal brains, as imaged using MRI. Using the proposed pipeline our results demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically, generating cortical surface models that are topologically correct and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models, and significant errors were found in only 2% of cases, which corresponded to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity.
Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas
People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test.
Less fatigue and better hearing acuity were associated with a larger pupil dilation. To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation, as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.
Principles of Temporal Processing Across the Cortical Hierarchy.
Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J
2018-05-02
The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations, including pooling, normalization and pattern completion, enable these systems to recognize and predict spatial structure, while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion.
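Two of the candidate temporal operations named in this abstract, temporal pooling and temporal normalization, can be illustrated with a minimal numerical sketch. This is not the authors' model; the function names, the max-pooling choice, and the window length are illustrative assumptions only.

```python
import numpy as np

def temporal_pool(x, window):
    """Summarize recent input: max over a causal sliding window."""
    return np.array([x[max(0, t - window + 1):t + 1].max()
                     for t in range(len(x))])

def temporal_normalize(x, window, eps=1e-6):
    """Scale each sample by the mean of recent activity (divisive normalization)."""
    return np.array([x[t] / (x[max(0, t - window + 1):t + 1].mean() + eps)
                     for t in range(len(x))])

signal = np.array([0., 1., 0., 4., 2., 2., 2., 0.])
pooled = temporal_pool(signal, window=3)      # slower, smoothed trace
normed = temporal_normalize(signal, window=3) # contrast relative to recent past
```

The pooled trace varies more slowly than the input, mimicking longer temporal receptive windows at higher cortical stages, while the normalized trace emphasizes change relative to recent context.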
Motivation to Address Self-Reported Hearing Problems in Adults with Normal Hearing Thresholds
ERIC Educational Resources Information Center
Alicea, Carly C. M.; Doherty, Karen A.
2017-01-01
Purpose: The purpose of this study was to compare the motivation to change in relation to hearing problems in adults with normal hearing thresholds but who report hearing problems and that of adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. Method: The motivation to change in…
Thibodeau, Linda
2014-06-01
The purpose of this study was to compare the benefits of 3 types of remote microphone hearing assistance technology (HAT), adaptive digital broadband, adaptive frequency modulation (FM), and fixed FM, through objective and subjective measures of speech recognition in clinical and real-world settings. Participants included 11 adults, ages 16 to 78 years, with primarily moderate-to-severe bilateral hearing impairment (HI), who wore binaural behind-the-ear hearing aids; and 15 adults, ages 18 to 30 years, with normal hearing. Sentence recognition in quiet and in noise and subjective ratings were obtained in 3 conditions of wireless signal processing. Performance by the listeners with HI when using the adaptive digital technology was significantly better than that obtained with the FM technology, with the greatest benefits at the highest noise levels. The majority of listeners also preferred the digital technology when listening in a real-world noisy environment. The wireless technology allowed persons with HI to surpass persons with normal hearing in speech recognition in noise, with the greatest benefit occurring with adaptive digital technology. The use of adaptive digital technology combined with speechreading cues would allow persons with HI to engage in communication in environments that would have otherwise not been possible with traditional wireless technology.
Piquado, Tepring; Benichov, Jonathan I.; Brownell, Hiram; Wingfield, Arthur
2013-01-01
Objective: The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Design: Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Study sample: Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. Results: When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Conclusion: Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall.
Paraouty, Nihaad; Ewert, Stephan D; Wallaert, Nicolas; Lorenzi, Christian
2016-07-01
Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured for a 500-Hz carrier frequency and a 5-Hz modulation rate. For AM detection, FM at the same rate as the AM was superimposed with varying FM depth. For FM detection, AM at the same rate was superimposed with varying AM depth. The target stimuli always contained both amplitude and frequency modulations, while the standard stimuli contained only the interfering modulation. Young and older normal-hearing listeners, as well as older listeners with mild-to-moderate sensorineural hearing loss, were tested. For all groups, AM and FM detection thresholds were degraded in the presence of the interfering modulation. AM detection with and without interfering FM was hardly affected by either age or hearing loss. While aging had an overall detrimental effect on FM detection with and without interfering AM, there was a trend that hearing loss further impaired FM detection in the presence of AM. Several models using optimal combination of temporal-envelope cues at the outputs of off-frequency filters were tested. The interfering effects could only be predicted for hearing-impaired listeners. This indirectly supports the idea that, in addition to envelope cues resulting from FM-to-AM conversion, normal-hearing listeners use temporal fine-structure cues for FM detection.
Clinical applications of selected binaural effects.
Noffsinger, D
1982-01-01
Examination was made of the behaviors exhibited on selected binaural tasks by 556 persons with diagnosed peripheral hearing loss or central nervous system damage. The tasks used included loudness balancing (LB), intracranial midline imaging (MI), masking level differences (MLD), and binaural beats (BB). The methods used were chosen for their clinical utility. Loudness balancing and midline imaging were of the most diagnostic value when hearing loss was present. Masking level differences were best at detecting pathology which did not produce hearing loss. None of the techniques were sensitive to cortical damage.
Verbal Working Memory in Children With Cochlear Implants
Caldwell-Tarr, Amanda; Low, Keri E.; Lowenstein, Joanna H.
2017-01-01
Purpose: Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants: Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method: A dual-component model of working memory was adopted, and a serial recall task measured storage and processing. Potential predictor variables were phonological awareness, vocabulary knowledge, nonverbal IQ, and several treatment variables. Potential dependent functions were literacy, expressive language, and speech-in-noise recognition. Results: Children with cochlear implants showed deficits in storage and processing, similar in size to those at second grade. Predictors of verbal working memory differed across groups: Phonological awareness explained the most variance in children with normal hearing; vocabulary explained the most variance in children with cochlear implants. Treatment variables explained little of the variance. Where potentially dependent functions were concerned, verbal working memory accounted for little variance once the variance explained by other predictors was removed. Conclusions: The verbal working memory deficits of children with cochlear implants arise due to signal degradation, which limits their abilities to acquire phonological awareness. That hinders their abilities to store items using a phonological code.
Speaker recognition with temporal cues in acoustic and electric hearing
NASA Astrophysics Data System (ADS)
Vongphoe, Michael; Zeng, Fan-Gang
2005-08-01
Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, emotional, and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a dissociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.
P300 and LORETA: comparison of normal subjects and schizophrenic patients.
Winterer, G; Mulert, C; Mientus, S; Gallinat, J; Schlattmann, P; Dorn, H; Herrmann, W M
2001-01-01
It was the aim of the present study 1) to investigate how many cortical activity maxima of scalp-recorded P300 are detected by Low Resolution Electromagnetic Tomography (LORETA) when analyses are performed with high time resolution, 2) to see if the resulting LORETA solution is in accordance with intracortical recordings as reported by others, and 3) to compare the given pattern of cortical activation maxima in the P300 timeframe between schizophrenic patients and normal controls. Current density analysis was performed in 3-D Talairach space with high time resolution, i.e., in 6-ms steps. This was done during an auditory choice reaction paradigm separately for normal subjects and schizophrenic patients with subsequent group comparisons. In normal subjects, a sequence of at least seven cortical activation maxima was found between 240 and 420 ms poststimulus: the prefrontal cortex, anterior or medial cingulum, posterior cingulum, parietal cortex, temporal lobe, prefrontal cortex, medial or anterior cingulum. Within the given limits of spatial resolution, this sequential maxima distribution largely met the expectations from reports on intracranial recordings and functional neuroimaging studies. However, localization accuracy was higher near the central midline than at lateral aspects of the brain. Schizophrenic patients activated their cortex less in a widespread area, mainly in the left hemisphere, including the prefrontal cortex, posterior cingulum, and the temporal lobe. From these analyses and comparisons with intracranial recordings as reported by others, it is concluded that LORETA correctly localizes P300-related cortical activity maxima on the basis of 19 electrodes, except for lateral cortical aspects, which is most likely an edge phenomenon. The data further suggest that the P300 deficit in schizophrenics involves an extended cortical network of the left hemisphere at several steps in time during the information processing stream.
The relationship between loudness intensity functions and the click-ABR wave V latency.
Serpanos, Y C; O'Malley, H; Gravel, J S
1997-10-01
To assess the relationship of loudness growth and the click-evoked auditory brain stem response (ABR) wave V latency-intensity function (LIF) in listeners with normal hearing or cochlear hearing loss. The effect of hearing loss configuration on the intensity functions was also examined. Behavioral and electrophysiological intensity functions were obtained using click stimuli of comparable intensities in listeners with normal hearing (Group I; n = 10), and cochlear hearing loss of flat (Group II; n = 10) or sloping (Group III; n = 10) configurations. Individual intensity functions were obtained from measures of loudness growth using the psychophysical methods of absolute magnitude estimation and production of loudness (geometrically averaged to provide the measured loudness function), and from the wave V latency measures of the ABR. Slope analyses for the behavioral and electrophysiological intensity functions were separately performed by group. The loudness growth functions for the groups with cochlear hearing loss approximated the normal function at high intensities, with overall slope values consistent with those reported from previous psychophysical research. The ABR wave V LIF for the group with a flat configuration of cochlear hearing loss approximated the normal function at high intensities, and was displaced parallel to the normal function for the group with sloping configuration. The relationship between the behavioral and electrophysiological intensity functions was examined at individual intensities across the range of the functions for each subject. A significant relationship was obtained between loudness and the ABR wave V LIFs for the groups with normal hearing and flat configuration of cochlear hearing loss; the association was not significant (p = 0.10) for the group with a sloping configuration of cochlear hearing loss. 
The results of this study established a relationship between loudness and the ABR wave V latency for listeners with normal hearing, and flat cochlear hearing loss. In listeners with a sloping configuration of cochlear hearing loss, the relationship was not significant. This suggests that the click-evoked ABR may be used to estimate loudness growth at least for individuals with normal hearing and those with a flat configuration of cochlear hearing loss. Predictive equations were derived to estimate loudness growth for these groups. The use of frequency-specific stimuli may provide more precise information on the nature of the relationship between loudness growth and the ABR wave V latency, particularly for listeners with sloping configurations of cochlear hearing loss.
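The predictive equations mentioned in this abstract relate ABR wave V latency to loudness growth. A minimal sketch of how such an equation could be fit is shown below; the data points and the log-linear model are hypothetical illustrations, not the coefficients actually derived in the study.

```python
import numpy as np

# Hypothetical paired observations for one listener group: as click level
# increases, wave V latency (ms) shortens and loudness estimates (arbitrary
# units, geometrically averaged as in magnitude-estimation procedures) grow.
latency_ms = np.array([8.0, 7.4, 6.9, 6.3, 5.9, 5.6])
loudness = np.array([2.0, 4.1, 8.3, 16.0, 31.0, 60.0])

# Least-squares fit of log10(loudness) as a linear function of latency,
# i.e., loudness ≈ 10^(slope * latency + intercept).
slope, intercept = np.polyfit(latency_ms, np.log10(loudness), 1)

def predict_loudness(latency):
    """Estimate loudness growth from a measured wave V latency (ms)."""
    return 10 ** (slope * latency + intercept)
```

Because latency decreases as level (and loudness) increases, the fitted slope is negative, and shorter latencies map to larger predicted loudness, which is the direction of the relationship the study reports for normal-hearing and flat-loss listeners.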
Firszt, Jill B; Reeder, Ruth M; Holden, Laura K
At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. 
The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.
Decision strategies of hearing-impaired listeners in spectral shape discrimination
NASA Astrophysics Data System (ADS)
Lentz, Jennifer J.; Leek, Marjorie R.
2002-03-01
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
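The weight-estimation procedure described above is, at its core, a trial-by-trial correlation: for each spectral component, correlate the listener's interval choices with that component's level difference between the two intervals. A minimal sketch with a simulated observer (all data and the observer's weighting are invented for illustration; the study's psychophysical details are not reproduced):

```python
# Sketch of spectral-weight estimation from perturbed 2AFC trials.
# Simulated data only; the true observer weights below are an assumption.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 2000, 5

# Per-component level perturbations (dB) in each of the two intervals.
lev1 = rng.normal(0.0, 2.0, (n_trials, n_components))
lev2 = rng.normal(0.0, 2.0, (n_trials, n_components))
diff = lev1 - lev2

# Simulated observer: weights the middle (signal) component most heavily.
true_w = np.array([0.1, 0.2, 1.0, 0.2, 0.1])
decision_var = diff @ true_w + rng.normal(0.0, 1.0, n_trials)
resp = (decision_var > 0).astype(float)  # 1 = chose "interval 1"

# Estimated weight per component: correlation of response with level diff.
weights = np.array([np.corrcoef(diff[:, k], resp)[0, 1]
                    for k in range(n_components)])
weights /= np.abs(weights).max()  # normalize for comparison across listeners
```

With enough trials the estimated weights recover the simulated observer's emphasis on the signal component, which is the sense in which such weights reveal the spectral regions a listener relies on.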
A fast, model-independent method for cerebral cortical thickness estimation using MRI.
Scott, M L J; Bromiley, P A; Thacker, N A; Hutchinson, C E; Jackson, A
2009-04-01
Several algorithms for measuring the cortical thickness in the human brain from MR image volumes have been described in the literature, the majority of which rely on fitting deformable models to the inner and outer cortical surfaces. However, the constraints applied during the model fitting process in order to enforce spherical topology and to fit the outer cortical surface in narrow sulci, where the cerebrospinal fluid (CSF) channel may be obscured by partial voluming, may introduce bias in some circumstances, and greatly increase the processor time required. In this paper we describe an alternative, voxel based technique that measures the cortical thickness using inversion recovery anatomical MR images. Grey matter, white matter and CSF are identified through segmentation, and edge detection is used to identify the boundaries between these tissues. The cortical thickness is then measured along the local 3D surface normal at every voxel on the inner cortical surface. The method was applied to 119 normal volunteers, and validated through extensive comparisons with published measurements of both cortical thickness and rate of thickness change with age. We conclude that the proposed technique is generally faster than deformable model-based alternatives, and free from the possibility of model bias, but suffers no reduction in accuracy. In particular, it will be applicable in data sets showing severe cortical atrophy, where thinning of the gyri leads to points of high curvature, and so the fitting of deformable models is problematic.
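The core measurement step of the voxel-based approach can be illustrated on a toy volume: take a segmented grey-matter mask, estimate the local surface normal from the gradient of a smoothed copy of the mask, and march outward from each inner-surface voxel until the ray leaves grey matter. The sketch below substitutes a spherical shell of known thickness for segmented MRI data and a simple box filter for whatever smoothing the authors used; both are assumptions for illustration.

```python
# Toy voxel-based cortical thickness: march along local normals through a
# synthetic grey-matter shell (inner radius 10, outer radius 14 voxels).
import numpy as np

n = 48
z, y, x = np.mgrid[:n, :n, :n]
r = np.sqrt((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2)
gm = (r >= 10) & (r < 14)            # "grey matter", true thickness = 4

def box_smooth(a):
    """3x3x3 mean filter (keeps the sketch free of non-numpy dependencies)."""
    out = np.zeros_like(a)
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(a, (dz, dy, dx), axis=(0, 1, 2))
    return out / 27.0

# Local surface normals from the gradient of the smoothed mask.
gz, gy, gx = np.gradient(box_smooth(gm.astype(float)))

def thickness(iz, iy, ix, step=0.25, max_dist=10.0):
    """March outward along the local normal until leaving grey matter."""
    g = np.array([gz[iz, iy, ix], gy[iz, iy, ix], gx[iz, iy, ix]])
    norm = np.linalg.norm(g)
    if norm < 1e-9:
        return None
    g /= norm                         # points outward on the inner surface
    pos, t = np.array([iz, iy, ix], float), 0.0
    while t < max_dist:
        pos += g * step
        t += step
        q = np.round(pos).astype(int)
        if not gm[q[0], q[1], q[2]]:
            return t
    return t

inner = np.argwhere(gm & (r < 10.7))          # inner-surface voxels
vals = [thickness(*v) for v in inner[::5]]    # subsample for speed
vals = [v for v in vals if v is not None]
mean_thickness = float(np.mean(vals))         # close to the true 4 voxels
```

On real data the same marching step would run on the segmented inner cortical surface, with edge detection rather than an analytic shell defining the boundaries.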
How close should a student with unilateral hearing loss stay to a teacher in a noisy classroom?
Noh, Heil; Park, Yong-Gyu
2012-06-01
To determine the optimal seating position in a noisy classroom for students with unilateral hearing loss (UHL) without any auditory rehabilitation as compared to normal-hearing adults and student peers. Speech discrimination scores (SDS) for babble noise at distances of 3, 4, 6, 8, and 10 m from a speaker were measured in a simulated classroom measuring 300 m3 (reverberation time = 0.43 s). Students with UHL (n = 25, 10-19 years old), normal-hearing students (n = 25), and normal-hearing adults (n = 25). The SDS for the normal-hearing adults at the 3, 4, 6, 8, and 10 m distances were 90.0±6.4%, 84.7±7.9%, 80.6±10.0%, 75.5±12.6%, and 68.8±13.0%, respectively. Those for the normal-hearing students were 90.1±6.2%, 78.1±9.4%, 66.4±10.7%, 61.8±11.2%, and 60.8±10.9%. Those for the UHL group were 81.7±9.0%, 70.2±12.4%, 62.1±17.2%, 52.4±17.1%, and 48.9±17.9%. The UHL group needed a seating position of 4.35 m to achieve a mean SDS equivalent to that of normal-hearing adults seated at 10 m. Likewise, the UHL group needed to be seated at 6.27 m to match the SDS of the normal-hearing students seated at 10 m. Students with UHL in noisy classrooms require seating ranging from 4.35 m to no further than 6.27 m away from a teacher to obtain an SDS comparable to normal-hearing adults and student peers.
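The 4.35 m and 6.27 m figures can be reproduced from the mean scores by simple linear interpolation of each group's score-versus-distance curve: find the distance at which the UHL curve crosses a reference group's score at 10 m. The sketch below assumes linear interpolation between the measured distances (a plausible but unstated choice) and uses the mean SDS values from the abstract.

```python
# Equivalent-distance arithmetic from the reported mean SDS values.
# Assumes linear interpolation between measured distances.
import numpy as np

dist = np.array([3, 4, 6, 8, 10], float)               # metres
adults   = np.array([90.0, 84.7, 80.6, 75.5, 68.8])    # mean SDS, %
students = np.array([90.1, 78.1, 66.4, 61.8, 60.8])
uhl      = np.array([81.7, 70.2, 62.1, 52.4, 48.9])

def equivalent_distance(target_score, scores, dists):
    """Distance at which the interpolated curve equals target_score.
    Scores fall with distance, so both arrays are flipped for np.interp,
    which requires an increasing x grid."""
    return float(np.interp(target_score, scores[::-1], dists[::-1]))

d_adults = equivalent_distance(adults[-1], uhl, dist)      # ~4.35 m
d_students = equivalent_distance(students[-1], uhl, dist)  # ~6.27 m
```

This recovers the reported 4.35 m (versus adults at 10 m) and 6.27 m (versus student peers at 10 m) to within rounding, suggesting the abstract's figures were derived exactly this way.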
Sommers, M S; Kirk, K I; Pisoni, D B
1997-04-01
The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words.
The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.
2014-01-01
Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094
Wiefferink, Carin H; Rieffe, Carolien; Ketelaar, Lizet; Frijns, Johan H M
2012-06-01
The purpose of the present study was to compare children with a cochlear implant and normal hearing children on aspects of emotion regulation (emotion expression and coping strategies) and social functioning (social competence and externalizing behaviors) and the relation between emotion regulation and social functioning. Participants were 69 children with cochlear implants (CI children) and 67 normal hearing children (NH children) aged 1.5-5 years. Parents answered questionnaires about their children's language skills, social functioning, and emotion regulation. Children also completed simple tasks to measure their emotion regulation abilities. Cochlear implant children had fewer adequate emotion regulation strategies and were less socially competent than normal hearing children. The parents of cochlear implant children did not report fewer externalizing behaviors than those of normal hearing children. While social competence in normal hearing children was strongly related to emotion regulation, cochlear implant children regulated their emotions in ways that were unrelated with social competence. On the other hand, emotion regulation explained externalizing behaviors better in cochlear implant children than in normal hearing children. While better language skills were related to higher social competence in both groups, they were related to fewer externalizing behaviors only in cochlear implant children. Our results indicate that cochlear implant children have less adequate emotion-regulation strategies and less social competence than normal hearing children. Since they received their implants relatively recently, they might eventually catch up with their hearing peers. Longitudinal studies should further explore the development of emotion regulation and social functioning in cochlear implant children. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The Brain/MINDS 3D digital marmoset brain atlas
Woodward, Alexander; Hashikawa, Tsutomu; Maeda, Masahide; Kaneko, Takaaki; Hikishima, Keigo; Iriki, Atsushi; Okano, Hideyuki; Yamaguchi, Yoko
2018-01-01
We present a new 3D digital brain atlas of the non-human primate, common marmoset monkey (Callithrix jacchus), with MRI and coregistered Nissl histology data. To the best of our knowledge this is the first comprehensive digital 3D brain atlas of the common marmoset having normalized multi-modal data, cortical and sub-cortical segmentation, and in a common file format (NIfTI). The atlas can be registered to new data, is useful for connectomics, functional studies, simulation and as a reference. The atlas was based on previously published work but we provide several critical improvements to make this release valuable for researchers. Nissl histology images were processed to remove illumination and shape artifacts and then normalized to the MRI data. Brain region segmentation is provided for both hemispheres. The data is in the NIfTI format making it easy to integrate into neuroscience pipelines, whereas the previous atlas was in an inaccessible file format. We also provide cortical, mid-cortical and white matter boundary segmentations useful for visualization and analysis. PMID:29437168
Jin, Huiyuan; Liu, Haitao
2016-01-01
Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the normal-hearing students' network exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.
Sound localization in noise in hearing-impaired listeners.
Lorenzi, C; Gatehouse, S; Lever, C
1999-06-01
The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.
Moreno-Gómez, Felipe N.; Véliz, Guillermo; Rojas, Marcos; Martínez, Cristián; Olmedo, Rubén; Panussis, Felipe; Dagnino-Subiabre, Alexies; Delgado, Carolina; Delano, Paul H.
2017-01-01
The perception of music depends on the normal function of the peripheral and central auditory system. Aged subjects without hearing loss have altered music perception, including pitch and temporal features. Presbycusis or age-related hearing loss is a frequent condition in elderly people, produced by neurodegenerative processes that affect the cochlear receptor cells and brain circuits involved in auditory perception. Clinically, presbycusis patients have bilateral high-frequency hearing loss and deteriorated speech intelligibility. Music impairments in presbycusis subjects can be attributed to the normal aging processes and to presbycusis neuropathological changes. However, whether presbycusis further impairs music perception remains controversial. Here, we developed a computerized version of the Montreal Battery of Evaluation of Amusia (MBEA) and assessed music perception in 175 Chilean adults aged between 18 and 90 years without hearing complaints and in symptomatic presbycusis patients. We give normative data for MBEA performance in a Latin-American population, showing age and educational effects. In addition, we found that symptomatic presbycusis was the most relevant factor determining global MBEA accuracy in aged subjects. Moreover, we show that melodic impairments in presbycusis individuals were diminished by music training, while performance in temporal tasks was affected by the educational level and music training. We conclude that music training and education are important factors as they can slow the deterioration of music perception produced by age-related hearing loss. PMID:28579956
Cortical Odor Processing in Health and Disease
Wilson, Donald A.; Xu, Wenjin; Sadrian, Benjamin; Courtiol, Emmanuelle; Cohen, Yaniv; Barnes, Dylan C.
2014-01-01
The olfactory system has a rich cortical representation, including a large archicortical component present in most vertebrates, and in mammals neocortical components including the entorhinal and orbitofrontal cortices. Together, these cortical components contribute to normal odor perception and memory. They help transform the physicochemical features of volatile molecules inhaled or exhaled through the nose into the perception of odor objects with rich associative and hedonic aspects. This chapter focuses on how olfactory cortical areas contribute to odor perception and begins to explore why odor perception is so sensitive to disease and pathology. Odor perception is disrupted by a wide range of disorders including Alzheimer’s disease, Parkinson’s disease, schizophrenia, depression, autism, and early life exposure to toxins. This olfactory deficit often occurs despite maintained functioning in other sensory systems. Does the unusual network of olfactory cortical structures contribute to this sensitivity? PMID:24767487
Background noise exerts diverse effects on the cortical encoding of foreground sounds.
Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E
2017-08-01
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood.
We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise. Copyright © 2017 the American Physiological Society.
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
2015-01-01
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral modulation depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047
Ruggieri, Serena; Petracca, Maria; Miller, Aaron; Krieger, Stephen; Ghassemi, Rezwan; Bencosme, Yadira; Riley, Claire; Howard, Jonathan; Lublin, Fred; Inglese, Matilde
2015-12-01
The investigation of cortical gray matter (GM), deep GM nuclei, and spinal cord damage in patients with primary progressive multiple sclerosis (PP-MS) provides insights into the neurodegenerative process responsible for clinical progression of MS. To investigate the association of magnetic resonance imaging measures of cortical, deep GM, and spinal cord damage and their effect on clinical disability. Cross-sectional analysis of 26 patients with PP-MS (mean age, 50.9 years; range, 31-65 years; including 14 women) and 20 healthy control participants (mean age, 51.1 years; range, 34-63 years; including 11 women) enrolled at a single US institution. Clinical disability was measured with the Expanded Disability Status Scale, 9-Hole Peg Test, and 25-Foot Walking Test. We collected data from January 1, 2012, through December 31, 2013. Data analysis was performed from January 21 to April 10, 2015. Cortical lesion burden, brain and deep GM volumes, spinal cord area and volume, and scores on the Expanded Disability Status Scale (score range, 0 to 10; higher scores indicate greater disability), 9-Hole Peg Test (measured in seconds; longer performance time indicates greater disability), and 25-Foot Walking Test (test covers 7.5 m; measured in seconds; longer performance time indicates greater disability). The 26 patients with PP-MS showed significantly smaller mean (SD) brain and spinal cord volumes than the 20 healthy controls (normalized brain volume, 1377.81 [65.48] vs 1434.06 [53.67] cm3 [P = .003]; normalized white matter volume, 650.61 [46.38] vs 676.75 [37.02] cm3 [P = .045]; normalized gray matter volume, 727.20 [40.74] vs 757.31 [38.95] cm3 [P = .02]; normalized neocortical volume, 567.88 [85.55] vs 645.00 [42.84] cm3 [P = .001]; normalized spinal cord volume for C2-C5, 72.71 [7.89] vs 82.70 [7.83] mm3 [P < .001]; and normalized spinal cord volume for C2-C3, 64.86 [7.78] vs 72.26 [7.79] mm3 [P = .002]).
The amount of damage in deep GM structures, especially with respect to the thalamus, was correlated with the number and volume of cortical lesions (mean [SD] thalamus volume, 8.89 [1.10] cm3; cortical lesion number, 12.6 [11.7]; cortical lesion volume, 0.65 [0.58] cm3; r = -0.52; P < .01). Thalamic atrophy also showed an association with cortical lesion count in the frontal cortex (mean [SD] thalamus volume, 8.89 [1.1] cm3; cortical lesion count in the frontal lobe, 5.0 [5.7]; r = -0.60; P < .01). No association was identified between magnetic resonance imaging measures of the brain and spinal cord damage. In this study, the neurodegenerative process occurring in PP-MS appeared to spread across connected structures in the brain while proceeding independently in the spinal cord. These results support the relevance of anatomical connectivity for the propagation of MS damage in the PP phenotype.
Loudness of dynamic stimuli in acoustic and electric hearing.
Zhang, C; Zeng, F G
1997-11-01
Traditional loudness models have been based on the average energy and the critical band analysis of steady-state sounds. However, most environmental sounds, including speech, are dynamic stimuli, in which the average level [e.g., the root-mean-square (rms) level] does not account for the large temporal fluctuations. The question addressed here was whether two stimuli of the same rms level but different peak levels would produce an equal loudness sensation. A modern adaptive procedure was used to replicate two classic experiments demonstrating that the sensation of "beats" in a two- or three-tone complex resulted in a louder sensation [E. Zwicker and H. Fastl, Psychoacoustics-Facts and Models (Springer-Verlag, Berlin, 1990)]. Two additional experiments were conducted to study exclusively the effects of the temporal envelope on the loudness sensation of dynamic stimuli. Loudness balance was performed by normal-hearing listeners between a white noise and a sinusoidally amplitude-modulated noise in one experiment, and by cochlear implant listeners between two harmonic stimuli of the same magnitude spectra, but different phase spectra, in the other experiment. The results from both experiments showed that, for two stimuli of the same rms level, the stimulus with greater temporal fluctuations sometimes produced a significantly louder sensation, depending on the temporal frequency and overall stimulus level. In normal-hearing listeners, the louder sensation was produced for the amplitude-modulated stimuli with modulation frequencies lower than 400 Hz, and gradually disappeared above 400 Hz, resulting in a low-pass filtering characteristic which bore some similarity to the temporal modulation transfer function. The extent to which loudness was greater was a nonmonotonic function of level in acoustic hearing and a monotonically increasing function in electric hearing.
These results suggest that the loudness sensation of a dynamic stimulus is not limited to a 100-ms temporal integration process, and may be determined jointly by a compression process in the cochlea and an expansion process in the brain. A level-dependent compression scheme that may better restore normal loudness of dynamic stimuli in hearing aids and cochlear implants is proposed.
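The stimulus construction behind these experiments is easy to sketch: an amplitude-modulated noise can be scaled to exactly the rms level of an unmodulated noise while retaining much larger peaks. A minimal Python sketch; the sample rate, modulation frequency, and full modulation depth are illustrative choices, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                       # sample rate (Hz); illustrative choice
t = np.arange(fs) / fs           # 1 s of signal

carrier = rng.standard_normal(fs)            # white Gaussian noise

# Sinusoidal amplitude modulation (100 Hz, full depth; both values
# are illustrative, not the study's conditions).
modulated = carrier * (1.0 + np.sin(2 * np.pi * 100.0 * t))

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Scale so both stimuli have exactly the same rms level.
modulated *= rms(carrier) / rms(modulated)

# Same average level, very different peak levels.
print("rms :", rms(carrier), rms(modulated))
print("peak:", np.max(np.abs(carrier)), np.max(np.abs(modulated)))
```

The two printed rms values match to machine precision, while the modulated stimulus has a markedly higher peak, which is precisely the dissociation the loudness-balance experiments probe.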
Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.
2016-01-01
Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. 
The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750
Tischler, Hadass; Moran, Anan; Belelovsky, Katya; Bronfeld, Maya; Korngreen, Alon; Bar-Gad, Izhar
2012-12-01
Parkinsonism is associated with major changes in neuronal activity throughout the cortico-basal ganglia loop. Current measures quantify changes in baseline neuronal and network activity but do not capture alterations in information propagation throughout the system. Here, we applied a novel non-invasive magnetic stimulation approach using a custom-made mini-coil that enabled us to study transmission of neuronal activity throughout the cortico-basal ganglia loop in both normal and parkinsonian primates. By magnetically perturbing cortical activity while simultaneously recording neuronal responses along the cortico-basal ganglia loop, we were able to directly investigate modifications in descending cortical activity transmission. We found that in both the normal and parkinsonian states, cortical neurons displayed similar multi-phase firing rate modulations in response to magnetic stimulation. However, in the basal ganglia, large synaptically driven stereotypic neuronal modulation was present in the parkinsonian state that was mostly absent in the normal state. The stimulation-induced neuronal activity pattern highlights the change in information propagation along the cortico-basal ganglia loop. Our findings thus point to the role of abnormal dynamic activity transmission rather than changes in baseline activity as a major component in parkinsonian pathophysiology. Moreover, our results hint that the application of transcranial magnetic stimulation (TMS) in human patients of different disorders may result in different neuronal effects than the one induced in normal subjects. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shahsavarani, Somayeh Bahar
High-level, top-down information such as linguistic knowledge is a salient cortical resource that influences speech perception under most listening conditions. But are all listeners able to exploit these resources for speech facilitation to the same extent? It was found that children with cochlear implants showed different patterns of benefit from contextual information in speech perception compared with their normal-hearing peers. Previous studies have discussed the role of non-acoustic factors such as linguistic and cognitive capabilities to account for this discrepancy. Given the fact that the amount of acoustic information encoded and processed by auditory nerves of listeners with cochlear implants differs from normal-hearing listeners and even varies across individuals with cochlear implants, it is important to study the interaction of specific acoustic properties of the speech signal with contextual cues. This relationship has been mostly neglected in previous research. In this dissertation, we aimed to explore how different acoustic dimensions interact to affect listeners' abilities to combine top-down information with bottom-up information in speech perception beyond the known effects of linguistic and cognitive capacities shown previously. Specifically, the present study investigated whether there were any distinct context effects based on the resolution of spectral versus slowly-varying temporal information in perception of spectrally impoverished speech. To that end, two experiments were conducted. In both experiments, a noise-vocoding technique was adopted to generate spectrally-degraded speech to approximate acoustic cues delivered to listeners with cochlear implants. The frequency resolution was manipulated by varying the number of frequency channels. The temporal resolution was manipulated by low-pass filtering of the amplitude envelope with varying low-pass cutoff frequencies.
The stimuli were presented to normal-hearing native speakers of American English. Our results revealed a significant interaction effect between spectral, temporal, and contextual information in the perception of spectrally-degraded speech. This suggests that the type and degree of degradation of bottom-up information jointly determine how well listeners can utilize contextual resources. These findings emphasize the importance of taking the listener's specific auditory abilities into consideration while studying context effects. These results also introduce a novel perspective for designing interventions for listeners with cochlear implants or other auditory prostheses.
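The noise-vocoding manipulation described above can be sketched in a few lines with NumPy/SciPy: split the signal into frequency bands, extract each band's slow amplitude envelope, and use the envelopes to modulate band-limited noise. All parameter values below (channel count, band edges, filter orders, envelope cutoff) are illustrative defaults, not the dissertation's settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=8, env_cutoff=160.0,
                 f_lo=100.0, f_hi=7000.0):
    """Noise-vocode x: keep only n_channels bands of slowly varying
    envelope information.  n_channels sets the spectral resolution;
    env_cutoff (Hz) sets the temporal resolution of the envelopes."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced bands
    noise = np.random.default_rng(0).standard_normal(len(x))
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        # Temporal resolution: half-wave rectify, then low-pass the
        # envelope at env_cutoff Hz.
        env = np.maximum(sosfiltfilt(env_sos, np.maximum(band, 0.0)), 0.0)
        # Spectral degradation: replace the fine structure with
        # band-limited noise carrying that envelope.
        channel = env * sosfiltfilt(band_sos, noise)
        power = np.mean(channel ** 2)
        if power > 0:
            # Match the channel's rms to that of the analysis band.
            channel *= np.sqrt(np.mean(band ** 2) / power)
        out += channel
    return out
```

Lowering `n_channels` degrades spectral resolution; lowering `env_cutoff` degrades temporal resolution, which is exactly the two-dimensional manipulation the experiments cross with contextual cues.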
Kujala, T; Kuuluvainen, S; Saalasti, S; Jansson-Verkasalo, E; von Wendt, L; Lepistö, T
2010-09-01
Asperger syndrome, belonging to the autistic spectrum of disorders, involves deficits in social interaction and prosodic use of language but normal development of formal language abilities. Auditory processing in this disorder involves both hyper- and hyporeactivity to acoustic changes. Responses composed of mismatch negativity (MMN) and obligatory components were recorded for five types of deviations in syllables (vowel, vowel duration, consonant, syllable frequency, syllable intensity) with the multi-feature paradigm from 8-12-year-old children with Asperger syndrome. Children with Asperger syndrome had larger MMNs for intensity and smaller MMNs for frequency changes than typically developing children, whereas no MMN group differences were found for the other deviant stimuli. Furthermore, children with Asperger syndrome performed more poorly than controls in the Comprehension of Instructions subtest of a language test battery. Cortical speech-sound discrimination is aberrant in children with Asperger syndrome. This is evident both as hypersensitive and depressed neural reactions to speech-sound changes, and is associated with features (frequency, intensity) which are relevant for prosodic processing. The multi-feature MMN paradigm, which includes variation and thereby resembles natural speech hearing circumstances, suggests an abnormal pattern of speech discrimination in Asperger syndrome, including both hypo- and hypersensitive responses to speech features. 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Postural control assessment in students with normal hearing and sensorineural hearing loss.
Melo, Renato de Souza; Lemos, Andrea; Macky, Carla Fabiana da Silva Toscano; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica
2015-01-01
Children with sensorineural hearing loss can present with instabilities in postural control, possibly as a consequence of hypoactivity of their vestibular system due to internal ear injury. To assess postural control stability in students with normal hearing (i.e., listeners) and with sensorineural hearing loss, and to compare data between groups, considering gender and age. This cross-sectional study evaluated the postural control of 96 students, 48 listeners and 48 with sensorineural hearing loss, aged between 7 and 18 years, of both genders, through the Balance Error Scoring Systems scale. This tool assesses postural control in two sensory conditions: stable surface and unstable surface. For statistical data analysis between groups, the Wilcoxon test for paired samples was used. Students with hearing loss showed more instability in postural control than those with normal hearing, with significant differences between groups (stable surface, unstable surface) (p<0.001). Students with sensorineural hearing loss showed greater instability in the postural control compared to normal hearing students of the same gender and age. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh
2018-03-01
Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that, at least, part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed on the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.
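The abstract does not name its ceiling correction, but a standard option when comparing intelligibility scores near 100% is the rationalized arcsine transform (Studebaker, 1985), which linearizes proportion-correct scores near floor and ceiling. A sketch, offered only as one plausible choice:

```python
import math

def rau(correct, total):
    """Rationalized arcsine units (Studebaker, 1985).  RAU scores
    behave roughly like percent correct at mid-scale but stretch out
    near 0% and 100%, so group differences near ceiling are not
    artificially compressed."""
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))
    return (146.0 / math.pi) * theta - 23.0

# Near ceiling, one extra word correct (out of 50) moves the RAU score
# far more than the same raw difference does at mid-scale.
print(rau(50, 50) - rau(49, 50))   # ceiling gap
print(rau(26, 50) - rau(25, 50))   # mid-scale gap
```

After such a transform, an apparent group difference that exists only because one group is pinned at ceiling largely disappears, which is the pattern the study reports for reverberation susceptibility.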
Marsella, Pasquale; Scorpecci, Alessandro; Vecchiato, Giovanni; Colosimo, Alfredo; Maglione, Anton Giulio; Babiloni, Fabio
2014-05-01
To investigate by means of non-invasive neuroelectrical imaging the differences in the perceived pleasantness of music between children with cochlear implants (CI) and normal-hearing (NH) children. 5 NH children and 5 children who received a sequential bilateral CI were assessed by means of High-Resolution EEG with Source Reconstruction as they watched a musical cartoon. Implanted children were tested before and after the second implant. For each subject the scalp Power Spectral Density was calculated in order to investigate the EEG alpha asymmetry. The scalp topographic distribution of the EEG power spectrum in the alpha band was different in children using one CI as compared to NH children (see figure). With two CIs the cortical activation pattern changed significantly, becoming more similar to the one observed in NH children. The findings support the hypothesis that bilateral CI users have a closer-to-normal perception of the pleasantness of music than unilaterally implanted children.
Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey
2017-09-01
Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017.
Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin
2016-01-01
Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL,…
Jin, Huiyuan; Liu, Haitao
2016-01-01
Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences. PMID:27920733
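The network construction the study describes can be sketched in plain Python: pool head-dependent pairs from a treebank into one graph, then inspect the degree distribution (the power-law-like signature of a scale-free network) and mean path length (one ingredient of small-worldness). The toy dependency pairs below are invented for illustration; a real treebank supplies many thousands:

```python
from collections import Counter, defaultdict, deque

# Toy "treebank": (head, dependent) word pairs pooled across sentences
# (invented examples, not data from either treebank in the study).
dependencies = [
    ("eat", "I"), ("eat", "apples"), ("apples", "red"),
    ("read", "she"), ("read", "books"), ("books", "old"),
    ("eat", "often"), ("read", "often"),
]

# Build an undirected word-adjacency structure from the dependencies.
adj = defaultdict(set)
for head, dep in dependencies:
    adj[head].add(dep)
    adj[dep].add(head)

# Degree distribution: many low-degree words plus a few hubs is the
# power-law-like profile the study compares across the two groups.
degree_counts = Counter(len(nbrs) for nbrs in adj.values())

def mean_path_length(adj):
    """Average shortest-path length (BFS from every node); short paths
    plus high clustering indicate small-world structure."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

print(dict(degree_counts), mean_path_length(adj))
```

With real treebank data, one would fit the tail of `degree_counts` against a power law to quantify how power-law-like each group's network is.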
Examination of the neighborhood activation theory in normal and hearing-impaired listeners.
Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A
2001-02-01
Experiments were conducted to examine the effects of lexical information on word recognition among normal hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high frequency over a low frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors: word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this program 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group).
The 400 words were presented randomly to normal hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as well as to an elderly group of listeners with sensorineural hearing loss in the speech-shaped noise (Experiment 3). The results of three experiments verified predictions of NAM in both normal hearing and hearing-impaired listeners. In each experiment, words from low density neighborhoods were recognized more accurately than those from high density neighborhoods. The presence of high frequency neighbors (average neighborhood frequency) produced poorer recognition performance than comparable conditions with low frequency neighbors. Word frequency was found to have a highly significant effect on word recognition. Lexical conditions with high word frequencies produced higher performance scores than conditions with low frequency words. The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and "in quiet." The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher level cognitive (lexical) factors on lower level phonemic processing.
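The two neighborhood metrics are simple to compute once words are represented as phoneme sequences, using the standard one-phoneme rule for neighbor status. A sketch with an invented toy lexicon (the words and frequency counts are illustrative, not items from the Webster's-based lexicon):

```python
# Toy phonemic lexicon with hypothetical frequency-of-occurrence counts
# (both the entries and the counts are invented for illustration).
lexicon = {
    ("k", "ae", "t"): 120,   # cat
    ("b", "ae", "t"): 40,    # bat
    ("k", "ae", "p"): 15,    # cap
    ("k", "ah", "t"): 5,     # cut
    ("ae", "t"): 300,        # at
    ("d", "aw", "g"): 90,    # dog
}

def is_neighbor(a, b):
    """One-phoneme rule: neighbors differ by a single substitution,
    deletion, or addition."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood(word, lexicon):
    """Neighborhood density and average neighborhood frequency for a
    target word, as defined in NAM."""
    nbrs = [w for w in lexicon if w != word and is_neighbor(w, word)]
    density = len(nbrs)
    mean_freq = sum(lexicon[w] for w in nbrs) / density if density else 0.0
    return density, mean_freq

print(neighborhood(("k", "ae", "t"), lexicon))   # (4, 90.0)
```

Partitioning a real lexicon into the eight orthogonal conditions then amounts to median-splitting items on word frequency, density, and mean neighbor frequency.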
Marcus, Sonya; Whitlow, Christopher T; Koonce, James; Zapadka, Michael E; Chen, Michael Y; Williams, Daniel W; Lewis, Meagan; Evans, Adele K
2014-02-01
Prior studies have associated gross inner ear abnormalities with pediatric sensorineural hearing loss (SNHL) using computed tomography (CT). No studies to date have specifically investigated morphologic inner ear abnormalities involving the contralateral unaffected ear in patients with unilateral SNHL. The purpose of this study is to evaluate contralateral inner ear structures of subjects with unilateral SNHL but no grossly abnormal findings on CT. IRB-approved retrospective analysis of pediatric temporal bone CT scans. 97 temporal bone CT scans, previously interpreted as "normal" based upon previously accepted guidelines by board certified neuroradiologists, were assessed using 12 measurements of the semicircular canals, cochlea and vestibule. The control-group consisted of 72 "normal" temporal bone CTs with underlying SNHL in the subject excluded. The study-group consisted of 25 normal-hearing contralateral temporal bones in subjects with unilateral SNHL. Multivariate analysis of covariance (MANCOVA) was then conducted to evaluate for differences between the study and control group. Cochlea basal turn lumen width was significantly greater in magnitude and central lucency of the lateral semicircular canal bony island was significantly lower in density for audiometrically normal ears of subjects with unilateral SNHL compared to controls. Abnormalities of the inner ear were present in the contralateral audiometrically normal ears of subjects with unilateral SNHL. These data suggest that patients with unilateral SNHL may have a more pervasive disease process that results in abnormalities of both ears. The findings of a cochlea basal turn lumen width disparity >5% from "normal" and/or a lateral semicircular canal bony island central lucency disparity of >5% from "normal" may indicate inherent risk to the contralateral unaffected ear in pediatric patients with unilateral sensorineural hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.
2012-01-01
Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666
A gradient in cortical pathology in multiple sclerosis by in vivo quantitative 7 T imaging
Louapre, Céline; Govindarajan, Sindhuja T.; Giannì, Costanza; Nielsen, A. Scott; Cohen-Adad, Julien; Sloane, Jacob; Kinkel, Revere P.
2015-01-01
We used a surface-based analysis of T2* relaxation rates at 7 T magnetic resonance imaging, which allows sampling quantitative T2* throughout the cortical width, to map in vivo the spatial distribution of intracortical pathology in multiple sclerosis. Ultra-high resolution quantitative T2* maps were obtained in 10 subjects with clinically isolated syndrome/early multiple sclerosis (≤3 years disease duration), 18 subjects with relapsing-remitting multiple sclerosis (≥4 years disease duration), 13 subjects with secondary progressive multiple sclerosis, and in 17 age-matched healthy controls. Quantitative T2* maps were registered to anatomical cortical surfaces for sampling T2* at 25%, 50% and 75% depth from the pial surface. Differences in laminar quantitative T2* between each patient group and controls were assessed using a general linear model (P < 0.05 corrected for multiple comparisons). In all 41 multiple sclerosis cases, we tested for associations between laminar quantitative T2*, neurological disability, Multiple Sclerosis Severity Score, cortical thickness, and white matter lesions. In patients, we measured T2* in intracortical lesions and in the intracortical portion of leukocortical lesions visually detected on 7 T scans. Cortical lesional T2* was compared with patients’ normal-appearing cortical grey matter T2* (paired t-test) and with mean cortical T2* in controls (linear regression using age as nuisance factor). Subjects with multiple sclerosis exhibited, relative to controls and independent of cortical thickness, significantly increased T2*, consistent with cortical myelin and iron loss. In early disease, T2* changes were focal and mainly confined at 25% depth, and in cortical sulci. In later disease stages T2* changes involved deeper cortical laminae, multiple cortical areas and gyri.
In patients, T2* in intracortical and leukocortical lesions was increased compared with normal-appearing cortical grey matter (P < 10−10 and P < 10−7), and mean cortical T2* in controls (P < 10−5 and P < 10−6). In secondary progressive multiple sclerosis, T2* in normal-appearing cortical grey matter was significantly increased relative to controls (P < 0.001). Laminar T2* changes may, thus, result from cortical pathology within and outside focal cortical lesions. Neurological disability and Multiple Sclerosis Severity Score correlated each with the degree of laminar quantitative T2* changes, independently from white matter lesions, the greatest association being at 25% depth, while they did not correlate with cortical thickness and volume. These findings demonstrate a gradient in the expression of cortical pathology throughout stages of multiple sclerosis, which was associated with worse disability and provides in vivo evidence for the existence of a cortical pathological process driven from the pial surface. PMID:25681411
Pinnock, Farena; Parlar, Melissa; Hawco, Colin; Hanford, Lindsay; Hall, Geoffrey B.
2017-01-01
This study assessed whether cortical thickness across the brain and regionally in terms of the default mode, salience, and central executive networks differentiates schizophrenia patients and healthy controls with normal range or below-normal range cognitive performance. Cognitive normality was defined using the MATRICS Consensus Cognitive Battery (MCCB) composite score (T = 50 ± 10) and structural magnetic resonance imaging was used to generate cortical thickness data. Whole brain analysis revealed that cognitively normal range controls (n = 39) had greater cortical thickness than both cognitively normal (n = 17) and below-normal range (n = 49) patients. Cognitively normal controls also demonstrated greater thickness than patients in regions associated with the default mode and salience, but not central executive networks. No differences on any thickness measure were found between cognitively normal range and below-normal range controls (n = 24) or between cognitively normal and below-normal range patients. In addition, structural covariance between network regions was high and similar across subgroups. Positive and negative symptom severity did not correlate with thickness values. Cortical thinning across the brain and regionally in relation to the default and salience networks may index shared aspects of the psychotic psychopathology that defines schizophrenia with no relation to cognitive impairment. PMID:28348889
Behavioral training enhances cortical temporal processing in neonatally deafened juvenile cats
Vollmer, Maike; Raggio, Marcia W.; Schreiner, Christoph E.
2011-01-01
Deaf humans implanted with a cochlear prosthesis depend largely on temporal cues for speech recognition because spectral information processing is severely impaired. Training with a cochlear prosthesis is typically required before speech perception shows improvement, suggesting that relevant experience modifies temporal processing in the central auditory system. We tested this hypothesis in neonatally deafened cats by comparing temporal processing in the primary auditory cortex (AI) of cats that received only chronic passive intracochlear electric stimulation (ICES) with cats that were also trained with ICES to detect temporally challenging trains of electric pulses. After months of chronic passive stimulation and several weeks of detection training in behaviorally trained cats, multineuronal AI responses evoked by temporally modulated ICES were recorded in anesthetized animals. The stimulus repetition rates that produced the maximum number of phase-locked spikes (best repetition rate) and 50% cutoff rate were significantly higher in behaviorally trained cats than the corresponding rates in cats that received only chronic passive ICES. Behavioral training restored neuronal temporal following ability to levels comparable with those recorded in naïve, adult-deafened animals with prior normal hearing. Importantly, best repetition rates and cutoff rates were highest for neuronal clusters activated by the electrode configuration used in behavioral training. These results suggest that neuroplasticity in the AI is induced by behavioral training and perceptual learning in animals deprived of ordinary auditory experience during development and indicate that behavioral training can ameliorate or restore temporal processing in the AI of profoundly deaf animals. PMID:21543753
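The two temporal-following metrics the study compares can be extracted from a rate-versus-repetition-rate response function in a few lines. The spike counts below are invented for illustration, not data from the study:

```python
# Hypothetical AI multiunit responses: phase-locked spike counts as a
# function of pulse-train repetition rate (values are made up).
rates = [4, 8, 12, 16, 24, 32, 48]      # pulses per second
spikes = [10, 14, 18, 16, 9, 5, 2]      # phase-locked spikes per trial

# Best repetition rate: the rate yielding the maximum response.
best_idx = max(range(len(spikes)), key=spikes.__getitem__)
best_rate = rates[best_idx]

# 50% cutoff rate: highest rate at which the response still reaches
# half the maximum, linearly interpolated between sampled rates.
half = spikes[best_idx] / 2
cutoff = rates[-1]
for i in range(best_idx, len(spikes) - 1):
    if spikes[i] >= half > spikes[i + 1]:
        frac = (spikes[i] - half) / (spikes[i] - spikes[i + 1])
        cutoff = rates[i] + frac * (rates[i + 1] - rates[i])
        break
print(best_rate, cutoff)   # 12 24.0
```

Training-related plasticity would then appear as a rightward shift of both `best_rate` and `cutoff` in the trained group.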
Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin
2016-10-01
This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percentage correct performance and verbal response time (VRT; onset time and total duration) were assessed. In general, speech recognition improved as signal-to-noise ratio (SNR) increased, both for children with NH and for children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses. For correct responses only, there was a general increase in VRT with decreasing SNR. Findings indicate poorer sentence recognition as SNR decreases, both in children with NH and in children with MBHL or UHL. VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified. Increasing VRT with decreasing SNR for correct responses also supports greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for the children in this study.
Cochlear compression: perceptual measures and implications for normal and impaired hearing.
Oxenham, Andrew J; Bacon, Sid P
2003-10-01
This article provides a review of recent developments in our understanding of how cochlear nonlinearity affects sound perception and how a loss of the nonlinearity associated with cochlear hearing impairment changes the way sounds are perceived. The response of the healthy mammalian basilar membrane (BM) to sound is sharply tuned, highly nonlinear, and compressive. Damage to the outer hair cells (OHCs) results in changes to all three attributes: in the case of total OHC loss, the response of the BM becomes broadly tuned and linear. Many of the differences in auditory perception and performance between normal-hearing and hearing-impaired listeners can be explained in terms of these changes in BM response. Effects that can be accounted for in this way include poorer audiometric thresholds, loudness recruitment, reduced frequency selectivity, and changes in apparent temporal processing. All these effects can influence the ability of hearing-impaired listeners to perceive speech, especially in complex acoustic backgrounds. A number of behavioral methods have been proposed to estimate cochlear nonlinearity in individual listeners. By separating the effects of cochlear nonlinearity from other aspects of hearing impairment, such methods may contribute towards identifying the different physiological mechanisms responsible for hearing loss in individual patients. This in turn may lead to more accurate diagnoses and more effective hearing-aid fitting for individual patients. A remaining challenge is to devise a behavioral measure that is sufficiently accurate and efficient to be used in a clinical setting.
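The compressive basilar-membrane response described in this review is often approximated by a broken-stick input-output function. The following sketch uses purely illustrative parameter values (a 40 dB knee and a 0.2 dB/dB compressive slope) to contrast a healthy ear with a fully linearized one after OHC loss:

```python
# Sketch of a "broken-stick" basilar-membrane input-output function: linear
# growth below a knee point, compressive growth (slope < 1 dB/dB) above it.
# With total OHC loss the response becomes linear throughout. Parameter
# values are illustrative only, not measured quantities.

def bm_output_db(input_db, knee_db=40.0, slope=0.2, ohc_loss=False):
    """Return the BM response (dB) for a tone at its characteristic
    frequency, given the input level in dB."""
    if ohc_loss or input_db <= knee_db:
        return input_db                              # linear region (1 dB/dB)
    return knee_db + slope * (input_db - knee_db)    # compressive region

# A 60 dB tone grows only 0.2 dB/dB above the 40 dB knee in the healthy ear,
# whereas the impaired (linearized) ear follows the input level directly.
healthy = bm_output_db(60.0)                   # 40 + 0.2 * 20 = 44.0
impaired = bm_output_db(60.0, ohc_loss=True)   # 60.0
```

The loss of the compressive region is what underlies loudness recruitment: above threshold, output grows much faster with level in the impaired ear than in the healthy one.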
Measuring listening effort: driving simulator vs. simple dual-task paradigm
Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth
2014-01-01
Objectives The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in driving task or visual reaction-time task performance across the conditions quantified the change in listening effort. Results Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., did not reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant.
The finding that our older (56 to 85 years old) participants' better speech recognition performance did not result in reduced listening effort was inconsistent with literature evaluating younger (approximately 20 years old) normal-hearing adults. Because of this, a follow-up study was conducted, in which the visual reaction-time dual-task experiment was repeated on younger adults with normal hearing using the same speech materials and road noises. Contrary to the findings with older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Conclusions Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured in younger adults with normal hearing may not fully translate to older listeners with hearing impairment. PMID:25083599
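The dual-task logic underlying both paradigms, inferring listening effort from the decline in secondary-task performance when a listening task is added, can be sketched as follows; the scores and the `dual_task_cost` helper are hypothetical illustrations, not the study's actual metric:

```python
# Sketch of dual-task logic for quantifying listening effort: effort is
# inferred from the proportional decline in secondary-task performance
# (driving or visual reaction time) when a speech task is added.
# All numbers below are hypothetical.

def dual_task_cost(baseline, dual):
    """Proportional decline in secondary-task performance when the primary
    (listening) task is added; higher values suggest more effort. Both
    arguments are scores where higher is better (e.g., lane-keeping)."""
    return (baseline - dual) / baseline

# Hypothetical driving scores: driving-only vs. driving + speech task
cost_unaided = dual_task_cost(0.95, 0.80)
cost_aided = dual_task_cost(0.95, 0.79)
# Similar costs across conditions would suggest, as in the study, that
# amplification improved speech recognition without reducing effort.
```

The key inference step is comparing the cost across hearing aid conditions rather than its absolute value: any nonzero cost shows the speech task draws resources from driving, while equal costs across conditions show the technology did not free those resources.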
ERIC Educational Resources Information Center
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik; Dau, Torsten
2014-01-01
Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM…
Dong, Junzi; Colburn, H. Steven; Sen, Kamal
2016-01-01
In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056
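The model's core mechanism, lateral inhibition between spatially tuned input channels, can be illustrated with a minimal sketch. The inhibition weight and input values below are illustrative assumptions, not parameters from the published model:

```python
# Minimal sketch of lateral inhibition between spatially tuned channels:
# each channel suppresses the others, so a stronger (target) channel
# survives while weaker (masker) channels are silenced, yet a lone source
# passes through unchanged. Weight and values are illustrative assumptions.

def lateral_inhibition(inputs, inhibition=0.8):
    """One step of cross-channel inhibition: each channel's output is its
    input minus a fraction of the summed activity of the other channels,
    floored at zero (firing rates cannot be negative)."""
    total = sum(inputs)
    return [max(0.0, x - inhibition * (total - x)) for x in inputs]

# Two competing sources at different locations: only the stronger survives.
target_vs_masker = lateral_inhibition([1.0, 0.6])
# A lone source faces no competition and is passed through intact, so the
# network can monitor a broader acoustic space when there is no competition.
single_source = lateral_inhibition([1.0, 0.0])
```

This captures the abstract's two regimes in miniature: spatial separation of target from interferer under competition, and broad monitoring when only one source is present.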
Long-latency auditory evoked potentials with verbal and nonverbal stimuli.
Oppitz, Sheila Jacques; Didoné, Dayane Domeneghini; Silva, Débora Durigon da; Gois, Marjana; Folgearini, Jordana; Ferreira, Geise Corrêa; Garcia, Michele Vargas
2015-01-01
Long-latency auditory evoked potentials represent the cortical activity related to attention, memory, and auditory discrimination skills. Acoustic signal processing occurs differently for verbal and nonverbal stimuli, influencing latency and amplitude patterns. The aim was to describe the latencies of the cortical potentials P1, N1, P2, N2, and P3, as well as P3 amplitude, with different speech stimuli and tone bursts, and to classify these components as present or absent. A total of 30 subjects with normal hearing, aged 18-32 years and matched by gender, were assessed. Nonverbal stimuli (tone bursts: 1000 Hz frequent, 4000 Hz rare) and verbal stimuli (/ba/ frequent; /ga/, /da/, and /di/ rare) were used. For the N2 component, the lowest latency found was 217.45 ms, with the ba/di stimulus pair, and the highest was 256.5 ms. For the P3 component, the shortest latency found was 298.7 ms, with the ba/ga stimulus pair, and the longest was 340 ms. For P3 amplitude, there was no statistically significant difference among the different stimuli. For the latencies of components P1, N1, P2, N2, and P3, there were no statistical differences between groups, regardless of the stimuli used. There was a difference in the latencies of potentials N2 and P3 among the stimuli employed, but no difference was observed for P3 amplitude. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Effect of occlusion, directionality and age on horizontal localization
NASA Astrophysics Data System (ADS)
Alworth, Lynzee Nicole
Localization acuity of a given listener depends on the ability to discriminate interaural time and level disparities. Interaural time differences are encoded by low-frequency information, whereas interaural level differences are encoded by high-frequency information. Much research has examined the effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study was to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on horizontal localization ability. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 years hearing impaired). Each normal-hearing participant completed one localization experiment (unaided/unamplified) in which they listened to the stimulus "Baseball" and selected its point of origin. Hearing-impaired listeners were fit with the same two receiver-in-the-ear hearing aids and the same dome types, thus controlling for microphone technologies, type of dome, and fitting between trials. Hearing-impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing.
Also, results revealed a significant difference in performance between dome types; however, upon further examination this difference was not significant. Therefore, results examining type of dome should be viewed with caution. Results examining microphone configuration, and microphone configuration by dome type, were not significant. Moreover, results evaluating performance relative to the unaided (unamplified) condition were not significant. Taken together, these results suggest that open-fit hearing instruments, regardless of microphone or dome type, do not degrade a listener's horizontal localization acuity relative to their older normal-hearing counterparts in quiet environments.
Numbenjapon, N; Costin, G; Pitukcheewanont, P
2012-09-01
We assessed bone size and bone density (BD) measurements using computed tomography (CT) in children and adolescents with hyperthyroidism treated with antithyroid medication. We found that cortical BD appeared to improve at 1 year and normalize at 2 years in all tested patients. Our previous study demonstrated that cortical BD in children and adolescents with untreated hyperthyroidism was significantly decreased as compared to age-, sex- and ethnicity-matched healthy controls. The present report evaluated whether attainment of euthyroidism by medical antithyroid treatment was able to improve or normalize cortical BD in these patients. Anthropometrics and three-dimensional CT bone measurements including cross-sectional area (CSA), cortical bone area (CBA) and cortical BD at midshaft of the femur (cortical bone), and CSA and BD of L(1) to L(3) vertebrae (cancellous bone) in 15 children and adolescents after 1- and 2-year treatments with antithyroid medication were reviewed and compared to their pretreatment results. All patients were euthyroid at 1 and 2 years after medical antithyroid treatment. After adjusting for age, height, weight and Tanner stage, a significant increase in cortical BD in all patients (15/15) was found after 1 year of treatment (P < 0.001). Normalization of cortical BD was demonstrated in all tested patients (10/15) after 2 years. There were no significant changes in the other cancellous or cortical bone parameters. Cortical BD was improved at 1 year and normalized at 2 years in hyperthyroid patients rendered euthyroid with antithyroid medication.
[Emotional response to music by postlingually-deafened adult cochlear implant users].
Wang, Shuo; Dong, Ruijuan; Zhou, Yun; Li, Jing; Qi, Beier; Liu, Bo
2012-10-01
To assess the emotional response to music by postlingually-deafened adult cochlear implant users. The Munich music questionnaire (MUMU) was used to match music experience and motivation for the use of music between 12 normal-hearing and 12 cochlear implant subjects. The emotion rating test in the Musical Sounds in Cochlear Implants (MuSIC) test battery was used to assess emotion perception ability for both normal-hearing and cochlear implant subjects. A total of 15 musical phrases were used. Responses were given by selecting a rating on a scale from 1 to 10, where "1" represents a "very sad" feeling and "10" a "very happy" feeling. In comparison with normal-hearing subjects, the 12 cochlear implant subjects made less active use of music for emotional purposes. The emotion ratings for cochlear implant subjects were similar to those of normal-hearing subjects, but with large variability. Postlingually-deafened cochlear implant subjects on average performed similarly to normal-hearing subjects in emotion rating tasks, but their active use of music for emotional purposes was clearly less than that of normal-hearing subjects.
Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile
2015-01-01
The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a CI program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); the speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The LLAEP of the child with a cochlear implant showed a gradual decrease in the latency of the P1 component after auditory stimulation (from 172 ms to 134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The LLAEP values of the hearing child were as expected for chronological age (from 132 ms to 128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163
Analysis of the relationship between cognitive skills and unilateral sensory hearing loss.
Calderón-Leyva, I; Díaz-Leines, S; Arch-Tirado, E; Lino-González, A L
2018-06-01
To analyse cognitive skills in patients with severe unilateral hearing loss versus those in subjects with normal hearing. 40 adults participated: 20 patients (10 women and 10 men) with severe unilateral hearing loss and 20 healthy subjects matched to the study group. Cognitive abilities were measured with the Spanish version of the Woodcock Johnson Battery-Revised; central auditory processing was assessed with monaural psychoacoustic tests. Box plots were drawn and t tests were performed for samples with a significance of P≤.05. A comparison of performances on the filtered word testing and time-compressed disyllabic word tests between patients and controls revealed a statistically significant difference (P≤.05) with greater variability among responses by hearing impaired subjects. This same group also showed a better cognitive performance on the numbers reversed, visual auditory learning, analysis synthesis, concept formation, and incomplete words tests. Patients with hearing loss performed more poorly than controls on the filtered word and time-compressed disyllabic word tests, but more competently on memory, reasoning, and auditory processing tasks. Complementary tests, such as those assessing central auditory processes and cognitive ability tests, are important and helpful for designing habilitation/rehabilitation and therapeutic strategies intended to optimise and stimulate cognitive skills in subjects with unilateral hearing impairment. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
Brain Volume Differences Associated With Hearing Impairment in Adults
Vriend, Chris; Heslenfeld, Dirk J.; Versfeld, Niek J.; Kramer, Sophia E.
2018-01-01
Speech comprehension depends on the successful operation of a network of brain regions. Processing of degraded speech is associated with different patterns of brain activity in comparison with that of high-quality speech. In this exploratory study, we studied whether processing degraded auditory input in daily life because of hearing impairment is associated with differences in brain volume. We compared T1-weighted structural magnetic resonance images of 17 hearing-impaired (HI) adults with those of 17 normal-hearing (NH) controls using a voxel-based morphometry analysis. HI adults were individually matched with NH adults based on age and educational level. Gray and white matter brain volumes were compared between the groups by region-of-interest analyses in structures associated with speech processing, and by whole-brain analyses. The results suggest increased gray matter volume in the right angular gyrus and decreased white matter volume in the left fusiform gyrus in HI listeners as compared with NH ones. In the HI group, there was a significant correlation between hearing acuity and cluster volume of the gray matter cluster in the right angular gyrus. This correlation supports the link between partial hearing loss and altered brain volume. The alterations in volume may reflect the operation of compensatory mechanisms that are related to decoding meaning from degraded auditory input. PMID:29557274
Koelewijn, Thomas; Versfeld, Niek J; Kramer, Sophia E
2017-10-01
For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention related' processes. How these processes affect the pupil dilation response for hearing impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. 
Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
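Adjusting the speech-to-noise ratio to yield about 50% sentences correct, as described in this abstract, is typically done with a 1-up-1-down adaptive track: the SNR is lowered after each correct response and raised after each error. A minimal sketch, in which the step size, starting SNR, and `track_srt` helper are illustrative assumptions:

```python
# Sketch of a 1-up-1-down adaptive procedure, a common way a speech
# reception threshold (SRT) is tracked toward ~50% sentences correct.
# Step size and starting SNR are illustrative assumptions.

def track_srt(responses, start_snr=0.0, step=2.0):
    """Return the SNR track (dB): the starting SNR followed by the SNR
    after each correct (True) / incorrect (False) response."""
    snrs = [start_snr]
    for correct in responses:
        snrs.append(snrs[-1] - step if correct else snrs[-1] + step)
    return snrs

# After correct, correct, incorrect, the track reverses direction; over many
# trials it oscillates around the 50%-correct point, whose average is the SRT.
track = track_srt([True, True, False])  # [0.0, -2.0, -4.0, -2.0]
```

On such a track, a lower ('better') SRT, as found for the NH group here, means the listener tolerates more noise for the same 50% performance level.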
Çizmeci, Hülya; Çiprut, Ayça
2018-06-01
This study aimed to (1) evaluate the gap-filling skills and reading mistakes of students with cochlear implants, and (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total duration of cochlear implant use were analyzed in relation to the subjects' reading skills development. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, enrolled in the 6th to 8th grades. The subjects' ages ranged between 12 and 14 years. Their reading skills were evaluated using the Informal Reading Inventory. A significant difference was found between implanted and normal-hearing students in both the percentage of reading errors and the gap-filling scores. On average, students using cochlear implants made more reading errors than normal-hearing students, and their gap-filling performance on the passages was lower than that of their normal-hearing peers. Neither age at implantation nor duration of implant use had a significant effect on the reading performance of implanted students. Even when implanted early, students with cochlear implants in the older grades differed significantly in reading performance from their normal-hearing peers. Copyright © 2018 Elsevier B.V. All rights reserved.
Pulvermüller, Friedemann; Shtyrov, Yury; Hauk, Olaf
2009-08-01
How long does it take the human mind to grasp the idea when hearing or reading a sentence? Neurophysiological methods that look directly at the time course of brain activity indexes of comprehension are critical for answering this question. As the dominant cognitive approaches, models of serial/cascaded and parallel processing, make conflicting predictions about the time course of psycholinguistic information access, they can be tested using neurophysiological brain activation recorded in MEG and EEG experiments. Seriality and cascading of lexical, semantic and syntactic processes receive support from late (latency of approximately 0.5 s) sequential neurophysiological responses, especially the N400 and P600. However, parallelism is substantiated by early, near-simultaneous brain indexes of a range of psycholinguistic processes, up to the level of semantic access and context integration, emerging as early as 100-250 ms after the critical stimulus information is present. Crucially, however, there are reliable latency differences of 20-50 ms between early cortical area activations reflecting lexical, semantic and syntactic processes, which are left unexplained by current serial and parallel brain models of language. We here offer a mechanistic model grounded in cortical nerve cell circuits that builds upon neuroanatomical and neurophysiological knowledge and explains both near-simultaneous activations and fine-grained delays. A key concept is that of discrete distributed cortical circuits with specific inter-area topographies. The full activation, or ignition, of specifically distributed binding circuits explains the near-simultaneity of early neurophysiological indexes of lexical, syntactic and semantic processing. Activity spreading within circuits, determined by between-area conduction delays, accounts for comprehension-related regional activation differences in the millisecond range.
Human amygdala activation by the sound produced during dental treatment: A fMRI study.
Yu, Jen-Fang; Lee, Kun-Che; Hong, Hsiang-Hsi; Kuo, Song-Bor; Wu, Chung-De; Wai, Yau-Yau; Chen, Yi-Fen; Peng, Ying-Chin
2015-01-01
During dental treatments, patients may experience negative emotions associated with the procedure. This study used functional magnetic resonance imaging (fMRI) to visualize cerebral cortical activation among dental patients in response to auditory stimuli produced by ultrasonic scaling and power suction equipment. Subjects (n = 7) aged 23-35 years were recruited. All were right-handed and, on clinical pure-tone audiometry, had normal hearing thresholds below 20 dB hearing level (HL). Subjects first underwent a dental calculus removal treatment, during which they were exposed to ultrasonic auditory stimuli from the scaling handpiece and salivary suction instruments. After treatment, subjects were imaged with fMRI while listening to recordings of the noise from the same instruments so that cortical activation in response to aversive auditory stimulation could be observed; the independent-sample confirmatory t-test was used for analysis. Subjects showed activation in the amygdala and prefrontal cortex, indicating that the ultrasonic auditory stimuli elicited an unpleasant response. Patients experienced unpleasant sensations caused by contact stimuli during the procedure; in addition, activation of the auditory cortex as well as the amygdala indicated that noise from the ultrasonic scaling handpiece was itself perceived as an aversive auditory stimulus. Thus, the sound of the ultrasonic scaling handpiece alone can cause unpleasant sensations.
Hearing screening in children with skeletal dysplasia.
Tunkel, David E; Kerbavaz, Richard; Smith, Beth; Rose-Hardison, Danielle; Alade, Yewande; Hoover-Fong, Julie
2011-12-01
To determine the prevalence of hearing loss and abnormal tympanometry in children with skeletal dysplasia. Clinical screening program. National convention of the Little People of America. Convenience sample of volunteers aged 18 years or younger with skeletal dysplasias. Hearing screening with behavioral testing and/or otoacoustic emissions, otoscopy, and tympanometry. A failed hearing screen was defined as a hearing threshold of 35 dB HL (hearing level) or greater at 1 or more tested frequencies or by a "fail" otoacoustic emissions response. Types B and C tympanograms were considered abnormal. A total of 58 children (aged ≤18 years) with skeletal dysplasia enrolled, and 56 completed hearing screening. Forty-one children had normal hearing (71%); 9 failed in 1 ear (16%); and 6 failed in both ears (10%). Forty-four children had achondroplasia, and 31 had normal hearing in both ears (71%); 8 failed hearing screening in 1 ear (18%), and 3 in both ears (7%). Tympanometry was performed in 45 children, with normal tympanograms found in 21 (47%), bilateral abnormal tympanograms in 15 (33%), and unilateral abnormal tympanograms in 9 (20%). Fourteen children with achondroplasia had normal tympanograms (42%); 11 had bilateral abnormal tympanograms (33%); and 8 had unilateral abnormal tympanograms (24%). For those children without functioning tympanostomy tubes, there was a 9.5 times greater odds of hearing loss if there was abnormal tympanometry (P = .03). Hearing loss and middle-ear disease are both highly prevalent in children with skeletal dysplasias. Abnormal tympanometry is highly associated with the presence of hearing loss, as expected in children with eustachian tube dysfunction. Hearing screening with medical intervention is recommended for these children.
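The reported association (9.5 times greater odds of hearing loss with abnormal tympanometry) is an odds ratio computed from a 2×2 contingency table. A minimal sketch; the cell counts below are hypothetical illustrations, since the abstract does not report the full table:

```python
# Odds ratio from a 2x2 contingency table.
def odds_ratio(a, b, c, d):
    """a: exposed with outcome, b: exposed without outcome,
       c: unexposed with outcome, d: unexposed without outcome."""
    return (a * d) / (b * c)

# HYPOTHETICAL example: 12 of 20 children with abnormal tympanometry had
# hearing loss, vs 3 of 25 with normal tympanometry (illustrative only).
or_est = odds_ratio(12, 8, 3, 22)
print(round(or_est, 1))  # 11.0
```

A significance test on such a table (e.g., Fisher's exact test) would then yield the P value reported alongside the odds ratio.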
Deconvolution of magnetic acoustic change complex (mACC).
Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W
2014-11-01
The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison of the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes differed between the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of cortical auditory evoked responses to rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of cortical neurons in response to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds, and it offers a potential new biomarker for discrimination of rapid sound transitions. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
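The core of the least-squares deconvolution idea can be sketched compactly: shifted copies of an unknown evoked response are stacked into a convolution (design) matrix according to the stimulus onsets, and the overlapped recording is solved as an overdetermined least-squares system. A minimal single-response NumPy sketch; the onset times, response shape, and noise level are illustrative, and the actual mACC paradigm estimated several response types jointly:

```python
import numpy as np

# Least-squares deconvolution sketch: recover one evoked response r (length L)
# from an overlapped recording y = sum_k shift(r, onset_k) + noise.
rng = np.random.default_rng(0)
L, n = 50, 400
true_r = np.exp(-np.arange(L) / 10.0) * np.sin(np.arange(L) / 3.0)
onsets = [0, 30, 65, 110, 160, 230, 300]  # jittered SOAs (illustrative)

# Design matrix: column j accumulates a 1 at each sample (onset_k + j).
X = np.zeros((n, L))
for k in onsets:
    for j in range(L):
        if k + j < n:
            X[k + j, j] += 1.0

y = X @ true_r + 0.05 * rng.standard_normal(n)  # overlapped, noisy recording
r_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate

print(np.allclose(r_hat, true_r, atol=0.2))  # True
```

Because successive responses overlap (some SOAs are shorter than the response length), simple averaging would smear adjacent responses together; the least-squares solve separates them as long as the onset jitter keeps the design matrix well conditioned.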
Oh, Soo Hee; Donaldson, Gail S.; Kong, Ying-Yee
2016-01-01
Objectives Previous studies have documented the benefits of bimodal hearing as compared with a CI alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. Design Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners’ ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50 percent duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. Results Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. 
For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech (Kong et al., 2015). Further, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. Conclusions Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared to continuous speech, suggesting that listeners’ ability to restore missing speech information depends not only on top-down linguistic knowledge, but also on the quality of the bottom-up sensory input. PMID:27007220
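The two gain measures can be made concrete. Percentage-point gain is the raw difference between the bimodal and vocoder-alone scores; normalized gain scales that difference by the headroom above baseline. A small sketch using a common definition of normalized gain (the study's exact formula is an assumption here, and the scores are illustrative):

```python
def percentage_point_gain(bimodal, baseline):
    """Bimodal benefit as the raw difference in percent-correct scores."""
    return bimodal - baseline

def normalized_gain(bimodal, baseline):
    """Benefit as a fraction of the available room above baseline,
    expressed in points (0-100). Assumed definition, for illustration."""
    return 100.0 * (bimodal - baseline) / (100.0 - baseline)

# Illustrative scores (not the study's data):
print(percentage_point_gain(70.0, 50.0))      # 20.0
print(round(normalized_gain(70.0, 50.0), 1))  # 40.0
```

The normalized measure matters when comparing conditions with different baselines: a 20-point raw gain from 50% baseline represents a much larger share of the possible improvement than the same 20 points from a 20% baseline.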
Kar, Sudipta; Kundu, Goutam; Maiti, Shyamal Kumar; Ghosh, Chiranjit; Bazmi, Badruddin Ahamed; Mukhopadhyay, Santanu
2016-01-01
Dental caries is one of the major modern-day diseases of dental hard tissue, affecting both normal-hearing and hearing-impaired children. This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, using the Caries Assessment Spectrum and Treatment (CAST) instrument. In a cross-sectional case-control study, the dental caries status of 6-12-year-old children was assessed; statistical analysis was carried out using the Z-test. A statistically significant difference was found between the study (hearing-impaired) and control (normal children) groups: about 30.51% of hearing-impaired children were affected by caries compared to 15.81% of normal children (P < 0.05). Regarding the individual caries assessment criteria, nearly all subgroups showed statistically significant differences, except the sealed tooth structure, internal caries-related discoloration in dentin, and distinct cavitation into dentine groups. The dental health of hearing-impaired children was found to be less satisfactory than that of normal children with respect to dental caries status evaluated with CAST.
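The between-group comparison described above amounts to a two-proportion Z-test. A minimal sketch using the reported prevalences; the group sizes are hypothetical, since the abstract does not report them:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported prevalences; group sizes of 200 each are HYPOTHETICAL.
z = two_proportion_z(0.3051, 200, 0.1581, 200)
print(z > 1.96)  # True: significant at P < 0.05 under these assumptions
```

With groups this large, a difference of roughly 30% vs 16% yields z ≈ 3.5, comfortably beyond the 1.96 two-tailed criterion; with much smaller groups the same proportions might not reach significance.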
Improving flexible thinking in deaf and hard of hearing children with virtual reality technology.
Passig, D; Eden, S
2000-07-01
The study investigated whether rotating three-dimensional (3-D) objects using virtual reality (VR) will affect flexible thinking in deaf and hard of hearing children. Deaf and hard of hearing subjects were distributed into experimental and control groups. The experimental group played virtual 3-D Tetris (a game using VR technology) individually, 15 minutes once weekly over 3 months. The control group played conventional two-dimensional (2-D) Tetris over the same period. Children with normal hearing participated as a second control group in order to establish whether deaf and hard of hearing children really are disadvantaged in flexible thinking. Before-and-after testing showed significantly improved flexible thinking in the experimental group; the deaf and hard of hearing control group showed no significant improvement. Also, before the experiment, the deaf and hard of hearing children scored lower in flexible thinking than the children with normal hearing. After the experiment, the difference between the experimental group and the control group of children with normal hearing was smaller.
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132
Affective Properties of Mothers' Speech to Infants with Hearing Impairment and Cochlear Implants
ERIC Educational Resources Information Center
Kondaurova, Maria V.; Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine
2015-01-01
Purpose: The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method:…
ERIC Educational Resources Information Center
Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta
2014-01-01
Purpose: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Method: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions,…
Souza, Pamela; Arehart, Kathryn; Miller, Christi Wise; Muralimanohar, Ramesh Kumar
2011-02-01
Recent research suggests that older listeners may have difficulty processing information related to the fundamental frequency (F0) of voiced speech. In this study, the focus was on the mechanisms that may underlie this reduced ability. We examined whether increased age resulted in decreased ability to perceive F0 using fine-structure cues provided by the harmonic structure of voiced speech sounds or cues provided by high-rate envelope fluctuations (periodicity). Younger listeners with normal hearing and older listeners with normal to near-normal hearing completed two tasks of F0 perception. In the first task (steady state F0), the fundamental frequency difference limen (F0DL) was measured adaptively for synthetic vowel stimuli. In the second task (time-varying F0), listeners relied on variations in F0 to judge intonation of synthetic diphthongs. For both tasks, three processing conditions were created: eight-channel vocoding that preserved periodicity cues to F0; a simulated electroacoustic stimulation condition, which consisted of high-frequency vocoder processing combined with a low-pass-filtered portion, and offered both periodicity and fine-structure cues to F0; and an unprocessed condition. F0 difference limens for steady state vowel sounds and the ability to discern rising and falling intonations were significantly worse in the older subjects compared with the younger subjects. For both older and younger listeners, scores were lowest for the vocoded condition, and there was no difference in scores between the unprocessed and electroacoustic simulation conditions. Older listeners had difficulty using periodicity cues to obtain information related to talker fundamental frequency. However, performance was improved by combining periodicity cues with (low frequency) acoustic information, and that strategy should be considered in individuals who are appropriate candidates for such processing. 
For cochlear implant candidates, this effect might be achieved by partial electrode insertion providing acoustic stimulation in the low frequencies or by the combination of a traditional implant in one ear and a hearing aid in the opposite ear.
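Fundamental frequency difference limens (F0DLs) of the kind measured adaptively in the first task are commonly obtained with an up-down staircase. Below is a minimal 2-down/1-up sketch (which converges near the 70.7%-correct point) run against a simulated listener; the step size, starting difference, and the listener's psychometric function are all illustrative assumptions, not the study's procedure:

```python
import random

def staircase_f0dl(p_correct_fn, start=10.0, step=2.0, reversals_needed=8):
    """2-down/1-up adaptive track: lower the F0 difference (in Hz) after
    two consecutive correct responses, raise it after one error.
    Returns the mean difference at the final reversals (the F0DL estimate)."""
    diff, correct_run, direction = start, 0, None
    reversal_values = []
    while len(reversal_values) < reversals_needed:
        if p_correct_fn(diff):
            correct_run += 1
            if correct_run == 2:
                correct_run = 0
                if direction == "up":       # turning point: record reversal
                    reversal_values.append(diff)
                direction = "down"
                diff = max(diff - step, 0.25)
        else:
            correct_run = 0
            if direction == "down":         # turning point: record reversal
                reversal_values.append(diff)
            direction = "up"
            diff += step
    last = reversal_values[-6:]
    return sum(last) / len(last)

# Simulated listener whose accuracy grows with the F0 difference (illustrative).
random.seed(1)
listener = lambda d: random.random() < min(0.99, 0.5 + d / 10.0)
print(staircase_f0dl(listener) > 0.0)  # True
```

Real procedures typically also shrink the step size after the first few reversals; that refinement is omitted here for brevity.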
Atypical incus necrosis: a case report and literature review.
Choudhury, N; Kumar, G; Krishnan, M; Gatland, D J
2008-10-01
We report an atypical case of ossicular necrosis affecting the incus, in the absence of any history of chronic serous otitis media. We also discuss the current theories of incus necrosis. A male patient presented with a history of right unilateral hearing loss and tinnitus. Audiometry confirmed right conductive deafness; tympanometry was normal bilaterally. He underwent a right exploratory tympanotomy, which revealed atypical erosion of the proximal long process of the incus. Middle-ear examination was otherwise normal, with a mobile stapes footplate. The redundant long process of the incus was excised and a partial ossicular replacement prosthesis was inserted, resulting in improved hearing. Ossicular pathologies most commonly affect the incus. The commonest defect is an absent lenticular and distal long process of the incus, which is most commonly associated with chronic otitis media. This is the first reported case of ossicular necrosis, particularly of the proximal long process of the incus, in the absence of chronic middle-ear pathology.
Story retelling skills in Persian speaking hearing-impaired children.
Jarollahi, Farnoush; Mohamadi, Reyhane; Modarresi, Yahya; Agharasouli, Zahra; Rahimzadeh, Shadi; Ahmadi, Tayebeh; Keyhani, Mohammad-Reza
2017-05-01
Since the pragmatic skills of hearing-impaired Persian-speaking children have not yet been investigated, particularly through story retelling, this study aimed to evaluate some pragmatic abilities of normal-hearing and hearing-impaired children using a story retelling test. 15 normal-hearing and 15 profoundly hearing-impaired 7-year-old children were evaluated using the story retelling test, with content validity of 89%, construct validity of 85%, and reliability of 83%. Three macro structure criteria, including topic maintenance, event sequencing, and explicitness, and four micro structure criteria, including referencing, conjunctive cohesion, syntax complexity, and utterance length, were assessed. The test was performed with live voice in a quiet room, where children were then asked to retell the story. The children's retellings were recorded on tape, transcribed, scored, and analyzed. In the macro structure criteria, utterances of hearing-impaired students were less consistent, did not give listeners enough information for a full understanding of the subject, and expressed story events in a rational order less frequently than those of the normal-hearing group (P < 0.0001). Regarding the micro structure criteria of the test, unlike the normal-hearing students who obtained high scores, hearing-impaired students failed to gain any scores on the items of this section. These results suggest that hearing-impaired children were not able to use language as effectively as their hearing peers, and they utilized quite different pragmatic functions. Copyright © 2017 Elsevier B.V. All rights reserved.
Acoustic properties of naturally produced clear speech at normal speaking rates
NASA Astrophysics Data System (ADS)
Krause, Jean C.; Braida, Louis D.
2004-01-01
Sentences spoken ``clearly'' are significantly more intelligible than those spoken ``conversationally'' for hearing-impaired listeners in a variety of backgrounds [Picheny et al., J. Speech Hear. Res. 28, 96-103 (1985); Uchanski et al., ibid. 39, 494-509 (1996); Payton et al., J. Acoust. Soc. Am. 95, 1581-1592 (1994)]. While producing clear speech, however, talkers often reduce their speaking rate significantly [Picheny et al., J. Speech Hear. Res. 29, 434-446 (1986); Uchanski et al., ibid. 39, 494-509 (1996)]. Yet speaking slowly is not solely responsible for the intelligibility benefit of clear speech (over conversational speech), since a recent study [Krause and Braida, J. Acoust. Soc. Am. 112, 2165-2172 (2002)] showed that talkers can produce clear speech at normal rates with training. This finding suggests that clear speech has inherent acoustic properties, independent of rate, that contribute to improved intelligibility. Identifying these acoustic properties could lead to improved signal processing schemes for hearing aids. To gain insight into these acoustical properties, conversational and clear speech produced at normal speaking rates were analyzed at three levels of detail (global, phonological, and phonetic). Although results suggest that talkers may have employed different strategies to achieve clear speech at normal rates, two global-level properties were identified that appear likely to be linked to the improvements in intelligibility provided by clear/normal speech: increased energy in the 1000-3000-Hz range of long-term spectra and increased modulation depth of low frequency modulations of the intensity envelope. Other phonological and phonetic differences associated with clear/normal speech include changes in (1) frequency of stop burst releases, (2) VOT of word-initial voiceless stop consonants, and (3) short-term vowel spectra.
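The two global-level properties identified (energy in the 1000-3000-Hz range of the long-term spectrum, and modulation depth of low-frequency fluctuations of the intensity envelope) can both be measured with short spectral computations. A sketch on a synthetic amplitude-modulated tone; the sampling rate, smoothing window, and signal are assumptions, with only the 1000-3000 Hz band edges taken from the abstract:

```python
import numpy as np

fs = 16000               # Hz, assumed sampling rate
t = np.arange(fs) / fs   # 1 s of signal

# Synthetic "speech-like" signal: a 2 kHz carrier with 4 Hz intensity modulation.
carrier = np.sin(2 * np.pi * 2000 * t)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)  # modulation depth 0.8
x = envelope * carrier

# 1) Fraction of long-term spectral energy in the 1000-3000 Hz band.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)
band = (freqs >= 1000) & (freqs <= 3000)
band_fraction = spectrum[band].sum() / spectrum.sum()

# 2) Modulation depth of the low-frequency intensity envelope, estimated
#    from the rectified signal smoothed by a 25 ms moving average.
intensity = np.convolve(np.abs(x), np.ones(400) / 400, mode="same")
core = intensity[1000:-1000]  # trim edge effects of the smoother
depth = (core.max() - core.min()) / (core.max() + core.min())

print(band_fraction > 0.95)  # nearly all energy lies in-band
print(0.6 < depth < 1.0)     # recovers roughly the 0.8 modulation depth
```

For real clear-speech recordings, the same two measurements would be compared between conversational and clear/normal tokens of the same talker; larger in-band energy and deeper low-frequency envelope modulation would mirror the properties reported above.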
Laviolette, Steven R
2007-07-01
The neural regulation of emotional perception, learning, and memory is essential for normal behavioral and cognitive functioning. Many of the symptoms displayed by individuals with schizophrenia may arise from fundamental disturbances in the ability to accurately process emotionally salient sensory information. The neurotransmitter dopamine (DA) and its ability to modulate neural regions involved in emotional learning, perception, and memory formation has received considerable research attention as a potential final common pathway to account for the aberrant emotional regulation and psychosis present in the schizophrenic syndrome. Evidence from both human neuroimaging studies and animal-based research using neurodevelopmental, behavioral, and electrophysiological techniques have implicated the mesocorticolimbic DA circuit as a crucial system for the encoding and expression of emotionally salient learning and memory formation. While many theories have examined the cortical-subcortical interactions between prefrontal cortical regions and subcortical DA substrates, many questions remain as to how DA may control emotional perception and learning and how disturbances linked to DA abnormalities may underlie the disturbed emotional processing in schizophrenia. Beyond the mesolimbic DA system, increasing evidence points to the amygdala-prefrontal cortical circuit as an important processor of emotionally salient information and how neurodevelopmental perturbances within this circuitry may lead to dysregulation of DAergic modulation of emotional processing and learning along this cortical-subcortical emotional processing circuit.
ERIC Educational Resources Information Center
Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer
2012-01-01
The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…
Auditory cortical responses in patients with cochlear implants
Burdo, S; Razza, S; Di Berardino, F; Tognola, G
2006-01-01
Currently, the most commonly used electrophysiological tests for cochlear implant evaluation are Averaged Electrical Voltages (AEV), Electrically evoked Auditory Brainstem Responses (EABR) and Neural Response Telemetry (NRT). The present paper focuses on the study of acoustic auditory cortical responses, or slow vertex responses, which are not widely used due to the difficulty in recording, especially in young children. Aims of this study were validation of slow vertex responses and their possible applications in monitoring post-implant results, particularly restoration of hearing and auditory maturation. In practice, the use of tone-bursts, including through hearing aids or cochlear implants, as in slow vertex responses, allows many more frequencies to be investigated and louder intensities to be reached than with other tests based on a click stimulus. The study design focused on latencies of the N1 and P2 slow vertex response peaks in cochlear implant recipients. The study population comprised 45 implant recipients (aged 2 to 70 years), divided into 5 homogeneous groups according to chronological age, age at onset of deafness, and age at implantation. For each subject, slow vertex responses and free-field auditory responses (PTAS) were recorded for tone-bursts at 500 and 2000 Hz before cochlear implant surgery (using hearing aid amplification) and during scheduled sessions at the 3rd and 12th month after implant activation. Results showed that N1 and P2 latencies decreased in all groups from the 3rd through the 12th month after activation. Subjects implanted before school age, or at least before age 8 years, showed the widest latency changes. All subjects showed a reduction in the gap between subjective thresholds (obtained with free-field auditory responses) and objective thresholds (obtained with slow vertex responses) from the pre-surgery stage to after cochlear implantation. In conclusion, a natural evolution of the neurophysiological cortical activities of the auditory pathway over time was found, especially in young children with prelingual deafness implanted at preschool age. Cochlear implantation appears to provide hearing restoration, demonstrated by the sharp reduction of the gap between subjective free-field auditory response and slow vertex response thresholds obtained with hearing aids vs. cochlear implant. PMID:16886849
Desjardins, Jamie L
2016-01-01
Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study-worn, commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing.
Correlation analysis between objective and self-reported ratings of listening effort showed no significant relation. Directional microphone processing effectively reduced the cognitive load of listening to speech in background noise. This is significant because it is likely that listeners with hearing impairment will frequently encounter noisy speech in their everyday communications. American Academy of Audiology.
Binaural hearing with electrical stimulation
Kan, Alan; Litovsky, Ruth Y.
2014-01-01
Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. PMID:25193553
Steininger, Stefanie C.; Liu, Xinyang; Gietl, Anton; Wyss, Michael; Schreiner, Simon; Gruber, Esmeralda; Treyer, Valerie; Kälin, Andrea; Leh, Sandra; Buck, Alfred; Nitsch, Roger M.; Prüssmann, Klaas P.; Hock, Christoph; Unschuld, Paul G.
2014-01-01
Background: Deposition of cortical amyloid beta (Aβ) is a correlate of aging and a risk factor for Alzheimer disease (AD). While several higher order cognitive processes involve functional interactions between cortex and cerebellum, this study aims to investigate effects of cortical Aβ deposition on coupling within the cerebro-cerebellar system. Methods: We included 15 healthy elderly subjects with normal cognitive performance as assessed by neuropsychological testing. Cortical Aβ was quantified using carbon-11-labeled Pittsburgh compound B positron-emission-tomography late frame signals. Volumes of brain structures were assessed by applying an automated parcellation algorithm to three-dimensional magnetization-prepared rapid gradient-echo T1-weighted images. Basal functional network activity within the cerebro-cerebellar system was assessed using blood-oxygen-level dependent resting state functional magnetic resonance imaging at the high field strength of 7 T for measuring coupling between cerebellar seeds and cerebral gray matter. A bivariate regression approach was applied for identification of brain regions with significant effects of individual cortical Aβ load on coupling. Results: Consistent with earlier reports, a significant degree of positive and negative coupling could be observed between cerebellar seeds and cerebral voxels. Significant positive effects of cortical Aβ load on cerebro-cerebellar coupling resulted for cerebral brain regions located in inferior temporal lobe, prefrontal cortex, hippocampus, parahippocampal gyrus, and thalamus. Conclusion: Our findings indicate that brain amyloidosis in cognitively normal elderly subjects is associated with decreased network efficiency within the cerebro-cerebellar system.
While the identified cerebral regions are consistent with established patterns of increased sensitivity for Aβ-associated neurodegeneration, additional studies are needed to elucidate the relationship between dysfunction of the cerebro-cerebellar system and risk for AD. PMID:24672483
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources, our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
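The per-band cancellation idea behind the L/E scheme can be sketched in a few lines: once the localization stage has estimated an interferer's inter-microphone delay, phase-aligning one channel to that delay and subtracting removes the interferer in every frequency band. The function below is a hypothetical illustration of this principle only, not the published algorithm.

```python
import cmath
import math

def per_band_subtract(left, right, tau, freqs):
    """Cancel an interferer with known inter-microphone delay `tau`
    (seconds) by phase-aligning the right channel and subtracting,
    independently in each frequency band.

    left, right: complex spectral bins of the two microphone signals
    freqs: center frequency (Hz) of each bin
    """
    out = []
    for L, R, f in zip(left, right, freqs):
        align = cmath.exp(2j * math.pi * f * tau)  # undo the interferer's delay
        out.append(L - R * align)                  # interferer terms cancel
    return out
```

With a target at broadside (zero delay), the subtraction also colors the target by a factor 1 − e^(jωτ) per band, which a full implementation would compensate; the sketch shows only the cancellation step.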
ERIC Educational Resources Information Center
Bunta, Ferenc; Goodin-Mayeda, C. Elizabeth; Procter, Amanda; Hernandez, Arturo
2016-01-01
Purpose: This study focuses on stop voicing differentiation in bilingual children with normal hearing (NH) and their bilingual peers with hearing loss who use cochlear implants (CIs). Method: Twenty-two bilingual children participated in our study (11 with NH, "M" age = 5;1 [years;months], and 11 with CIs, "M" hearing age =…
False Belief Development in Children Who Are Hard of Hearing Compared with Peers with Normal Hearing
ERIC Educational Resources Information Center
Walker, Elizabeth A.; Ambrose, Sophie E.; Oleson, Jacob; Moeller, Mary Pat
2017-01-01
Purpose: This study investigates false belief (FB) understanding in children who are hard of hearing (CHH) compared with children with normal hearing (CNH) at ages 5 and 6 years and at 2nd grade. Research with this population has theoretical significance, given that the early auditory-linguistic experiences of CHH are less restricted compared with…
ERIC Educational Resources Information Center
Morgan, Shae D.; Ferguson, Sarah Hargus
2017-01-01
Purpose: In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style. Method: The first experiment included 18 YNH listeners, and the second included 10 additional…
How Hearing Loss and Age Affect Emotional Responses to Nonspeech Sounds
ERIC Educational Resources Information Center
Picou, Erin M.
2016-01-01
Purpose: The purpose of this study was to evaluate the effects of hearing loss and age on subjective ratings of emotional valence and arousal in response to nonspeech sounds. Method: Three groups of adults participated: 20 younger listeners with normal hearing (M = 24.8 years), 20 older listeners with normal hearing (M = 55.8 years), and 20 older…
Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey
2017-01-01
Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuipsTM (RMQ) targeting speechreading during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI) including auditory cortex and visual cortex for uni-sensory processing, and superior temporal sulcus (STS) for AV integration, were identified for each person through independent functional localizer task. The results showed experience-dependent changes involving ROIs of auditory cortex, STS and functional connectivity between uni-sensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for the malleable experience-driven cortical functionality for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger subject sample and systematic control to fill in the knowledge gap to understand brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763
Luo, Xin; Fu, Qian-Jie; Galvin, John J.
2007-01-01
The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas listeners with cochlear implant recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in listeners with cochlear implant as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution. PMID:18003871
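The envelope-filter manipulation in experiments 2 and 3 (50 vs. 400/500 Hz cutoffs) amounts to smoothing each channel's rectified output before it modulates the carrier. A minimal single-channel sketch, assuming half-wave rectification and a one-pole low-pass; the actual processors and filter orders used in the study are not reproduced here.

```python
import math

def envelope(x, fs, cutoff):
    """Half-wave rectify then one-pole low-pass: the envelope-extraction
    stage of a channel vocoder. `cutoff` (Hz) plays the role of the
    50 vs. 400/500 Hz envelope filter in the experiments."""
    a = math.exp(-2 * math.pi * cutoff / fs)  # one-pole smoothing coefficient
    y, out = 0.0, []
    for s in x:
        r = max(s, 0.0)              # half-wave rectification
        y = a * y + (1 - a) * r      # low-pass smoothing
        out.append(y)
    return out
```

A lower cutoff yields a slower-varying envelope, discarding the periodicity cues that support voice pitch, which is one candidate explanation for the cutoff effect on vocal emotion recognition.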
Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.
2015-01-01
We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm/deg at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341
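A common form of the conformal retina-to-cortex map is w = k·ln(z + E2), whose derivative gives a linear magnification M(E) = k/(E + E2) in mm/deg at eccentricity E. The constants below are illustrative textbook values, not the parameters fitted in this study.

```python
def cortical_magnification(E, k=17.3, E2=0.75):
    """Linear cortical magnification factor M(E) = k / (E + E2) in mm/deg,
    from the log-conformal map w = k * ln(z + E2).

    E:  eccentricity in degrees of visual angle
    k:  cortical scaling constant (mm); illustrative value
    E2: eccentricity at which magnification halves (deg); illustrative value
    """
    return k / (E + E2)
```

The model captures the characteristic fall-off of magnification with eccentricity: foveal locations get far more cortical territory per degree than peripheral ones.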
Perspectives on the Pure-Tone Audiogram.
Musiek, Frank E; Shinn, Jennifer; Chermak, Gail D; Bamiou, Doris-Eva
The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement. To review and synthesize the literature regarding the utility and limitations of the pure-tone audiogram in determining dysfunction of peripheral sensory and neural systems, as well as the CANS, and to identify other tests and procedures that can supplement pure-tone thresholds and provide enhanced diagnostic insight, especially regarding problems of the central auditory system. A systematic review and synthesis of the literature. The authors independently searched and reviewed literature (journal articles, book chapters) pertaining to the limitations of the pure-tone audiogram. The pure-tone audiogram provides information as to hearing sensitivity across a selected frequency range. Normal or near-normal pure-tone thresholds sometimes are observed despite cochlear damage. There are a surprising number of patients with acoustic neuromas who have essentially normal pure-tone thresholds. In cases of central deafness, depressed pure-tone thresholds may not accurately reflect the status of the peripheral auditory system. Listening difficulties are seen in the presence of normal pure-tone thresholds. Suprathreshold procedures and a variety of other tests can provide information regarding other and often more central functions of the auditory system. 
The audiogram is a primary tool for determining type, degree, and configuration of hearing loss; however, it provides the clinician with information regarding only hearing sensitivity, and no information about central auditory processing or the auditory processing of real-world signals (i.e., speech, music). The pure-tone audiogram offers limited insight into functional hearing and should be viewed only as a test of hearing sensitivity. Given the limitations of the pure-tone audiogram, a brief overview is provided of available behavioral tests and electrophysiological procedures that are sensitive to the function and integrity of the central auditory system, which provide better diagnostic and rehabilitative information to the clinician and patient. American Academy of Audiology
Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram.
Hossain, Mohammad E; Jassim, Wissam A; Zilany, Muhammad S A
2016-01-01
Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants.
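The bispectrum evaluates triple products of spectral components, so it is sensitive to quadratic phase coupling that the ordinary power spectrum discards. A toy 1-D sketch of the quantity follows; the paper's estimator operates on 2-D neurograms and is not reproduced here.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for small N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def bispectrum(x, k1, k2):
    """Third-order spectrum B(k1, k2) = X[k1] * X[k2] * conj(X[k1 + k2]).
    Phase-coupled triples (k1, k2, k1 + k2) produce a large |B|."""
    X = dft(x)
    return X[k1] * X[k2] * X[(k1 + k2) % len(x)].conjugate()
```

For a signal containing components at bins 2, 3, and 5 with aligned phases, |B(2, 3)| is large; for bin pairs with no underlying component the bispectrum is near zero.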
Zeitooni, Mehrnaz; Mäki-Torkko, Elina; Stenfelt, Stefan
The purpose of this study is to evaluate binaural hearing ability in adults with normal hearing when bone conduction (BC) stimulation is bilaterally applied at the bone conduction hearing aid (BCHA) implant position as well as at the audiometric position on the mastoid. The results with BC stimulation are compared with bilateral air conduction (AC) stimulation through earphones. Binaural hearing ability is investigated with tests of spatial release from masking and binaural intelligibility level difference using sentence material, binaural masking level difference with tonal chirp stimulation, and precedence effect using noise stimulus. In all tests, results with bilateral BC stimulation at the BCHA position illustrate an ability to extract binaural cues similar to BC stimulation at the mastoid position. The binaural benefit is overall greater with AC stimulation than BC stimulation at both positions. The binaural benefit for BC stimulation at the mastoid and BCHA position is approximately half in terms of decibels compared with AC stimulation in the speech based tests (spatial release from masking and binaural intelligibility level difference). For binaural masking level difference, the binaural benefit for the two BC positions with chirp signal phase inversion is approximately twice the benefit with inverted phase of the noise. The precedence effect results with BC stimulation at the mastoid and BCHA position are similar for low frequency noise stimulation but differ with high-frequency noise stimulation. The results confirm that binaural hearing processing with bilateral BC stimulation at the mastoid position is also present at the BCHA implant position. This indicates the ability for binaural hearing in patients with good cochlear function when using bilateral BCHAs.
Visual Field Abnormalities among Adolescent Boys with Hearing Impairments
KHORRAMI-NEJAD, Masoud; HERAVIAN, Javad; SEDAGHAT, Mohamad-Reza; MOMENI-MOGHADAM, Hamed; SOBHANI-RAD, Davood; ASKARIZADEH, Farshad
2016-01-01
The aim of this study was to compare the visual field (VF) categorizations (based on the severity of VF defects) between adolescent boys with hearing impairments and those with normal hearing. This cross-sectional study involved the evaluation of the VF of 64 adolescent boys with hearing impairments and 68 age-matched boys with normal hearing at high schools in Tehran, Iran, in 2013. All subjects had an intelligence quotient (IQ) > 70. The hearing impairments were classified based on severity and time of onset. Participants underwent a complete eye examination, and the VFs were investigated using automated perimetry with a Humphrey Visual Field Analyzer. This device was used to determine their foveal threshold (FT), mean deviation (MD), and Glaucoma Hemifield Test (GHT) results. Half (50%) of the boys with hearing impairments had profound hearing impairments. There was no significant between-group difference in age (P = 0.49) or IQ (P = 0.13). There was no between-group difference in the corrected distance visual acuity (P = 0.183). According to the FT, MD, and GHT results, the percentage of boys with abnormal VFs in the hearing impairment group was significantly greater than that in the normal hearing group: 40.6% vs. 22.1%, 59.4% vs. 19.1%, and 31.2% vs. 8.8%, respectively (P < 0.0001). The mean MD in the hearing impairment group was significantly worse than that in the normal hearing group (-4.61 ± 6.52 vs. -0.79 ± 2.04 dB, P < 0.0001), and the mean FT was also significantly worse (35.30 ± 1.43 vs. 38.97 ± 1.66 dB, P < 0.0001). Moreover, there was a significant between-group difference in the GHT results (P < 0.0001). Thus, there were higher percentages of boys with VF abnormalities and worse mean MD, FT, and GHT results among those with hearing impairments compared to those with normal hearing. These findings emphasize the need for detailed VF assessments for patients with hearing impairments. PMID:28293650
Sheft, Stanley; Shafiro, Valeriy; Lorenzi, Christian; McMullen, Rachel; Farrell, Caitlin
2012-01-01
Objective: The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. Using a clinically oriented approach, the purpose of the present study was to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design: Data were collected from 18 normal-hearing young adults, and 18 participants who were at least 60 years old, nine normal-hearing and nine with a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF) both in quiet and with a speech-babble masker present, stimulus duration, and signal-to-noise ratio (SNRFM) in the presence of a speech-babble masker. Speech perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results: Results showed a significant effect of age, but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNRFM). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNRFM were also the two conditions for which performance was significantly correlated with listener age when controlling for the effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNRFM condition were significantly correlated with QuickSIN performance. Conclusions: Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM.
Furthermore, the relationship between QuickSIN performance and the SNRFM thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may in part relate to diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory processing abilities. PMID:22790319
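Stimuli of the kind used in this task can be approximated by frequency-modulating a 1-kHz carrier with low-passed noise, with ΔF as the peak excursion. The sketch below substitutes a one-pole low-pass for the study's 5-Hz lowpass-noise modulator; parameter values are illustrative, not the study's.

```python
import math
import random

def stochastic_fm(fs=16000, dur=0.5, fc=1000.0, df=40.0, cutoff=5.0, seed=1):
    """1-kHz carrier frequency-modulated by low-passed uniform noise.

    fs:     sample rate (Hz)
    dur:    duration (s)
    fc:     carrier frequency (Hz)
    df:     peak frequency excursion, the Delta-F of the task (Hz)
    cutoff: modulator low-pass cutoff (Hz)
    """
    rng = random.Random(seed)
    a = math.exp(-2 * math.pi * cutoff / fs)
    m, phase, out = 0.0, 0.0, []
    for _ in range(int(fs * dur)):
        m = a * m + (1 - a) * rng.uniform(-1.0, 1.0)  # slow noise modulator
        inst_f = fc + df * m                          # instantaneous frequency
        phase += 2 * math.pi * inst_f / fs
        out.append(math.sin(phase))
    return out
```

Discrimination thresholds in the study correspond to the smallest `df` (or poorest SNR) at which two independent draws of the modulator can still be told apart.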
Borghammer, Per; Chakravarty, Mallar; Jonsdottir, Kristjana Yr; Sato, Noriko; Matsuda, Hiroshi; Ito, Kengo; Arahata, Yutaka; Kato, Takashi; Gjedde, Albert
2010-05-01
Recent cerebral blood flow (CBF) and glucose consumption (CMRglc) studies of Parkinson's disease (PD) revealed conflicting results. Using simulated data, we previously demonstrated that the often-reported subcortical hypermetabolism in PD could be explained as an artifact of biased global mean (GM) normalization, and that low-magnitude, extensive cortical hypometabolism is best detected by alternative data-driven normalization methods. Thus, we hypothesized that PD is characterized by extensive cortical hypometabolism but no concurrent widespread subcortical hypermetabolism and tested it on three independent samples of PD patients. We compared SPECT CBF images of 32 early-stage and 33 late-stage PD patients with that of 60 matched controls. We also compared PET FDG images from 23 late-stage PD patients with that of 13 controls. Three different normalization methods were compared: (1) GM normalization, (2) cerebellum normalization, (3) reference cluster normalization (Yakushev et al.). We employed standard voxel-based statistics (fMRIstat) and principal component analysis (SSM). Additionally, we performed a meta-analysis of all quantitative CBF and CMRglc studies in the literature to investigate whether the global mean (GM) values in PD are decreased. Voxel-based analysis with GM normalization and the SSM method performed similarly, i.e., both detected decreases in small cortical clusters and concomitant increases in extensive subcortical regions. Cerebellum normalization revealed more widespread cortical decreases but no subcortical increase. In all comparisons, the Yakushev method detected nearly identical patterns of very extensive cortical hypometabolism. Lastly, the meta-analyses demonstrated that global CBF and CMRglc values are decreased in PD. Based on the results, we conclude that PD most likely has widespread cortical hypometabolism, even at early disease stages. In contrast, extensive subcortical hypermetabolism is probably not a feature of PD.
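The normalization bias the authors describe is easy to demonstrate numerically: dividing by an image's own global mean makes spared regions look hyperactive whenever cortex is globally reduced, whereas dividing by the mean of a spared reference region does not. The sketch below is a minimal illustration; region indices and values are invented.

```python
def normalize(values, method="global", reference=None):
    """Scale a voxel vector for ratio-based comparison.

    'global'    divides by the image's own mean (can manufacture
                apparent subcortical hypermetabolism when cortex is
                globally reduced).
    'reference' divides by the mean over `reference` (indices of a
                spared region, e.g. cerebellum).
    """
    if method == "global":
        scale = sum(values) / len(values)
    else:
        scale = sum(values[i] for i in reference) / len(reference)
    return [v / scale for v in values]
```

With cortex reduced and subcortex spared, global-mean scaling pushes the spared voxels above 1.0 (spurious "hypermetabolism"), while reference scaling leaves them at 1.0, mirroring the simulation argument in the abstract.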
Binaural Interference and the Effects of Age and Hearing Loss.
Mussoi, Bruna S S; Bentler, Ruth A
2017-01-01
The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. To examine binaural interference through speech perception tests, in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. A cross-sectional study. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) using group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage).
On the CST, there was significant binaural advantage across all groups with group data analysis, as well as for 12% of the participants at each of the two signal-to-babble ratios (SBRs) tested. One participant had binaural interference at each SBR. Finally, on the DDT, a significant right-ear advantage was found with group data, and for at least some participants. Regarding age effects, more participants in the pooled elderly groups had binaural interference (33.3%) than in the younger group (16.7%), on the HINT. The presence of hearing loss yielded overall lower scores, but none of the comparisons between bilateral and unilateral performance were affected by hearing loss. Results of within-subject analyses on the HINT agree with previous findings of binaural interference in ≥17% of listeners. Across all groups, a significant right-ear advantage was also seen on the DDT. HINT results support the notion that the prevalence of binaural interference is likely higher in the elderly population. Hearing loss, however, did not affect the differences between bilateral and better unilateral scores. The possibility of binaural interference should be considered when fitting hearing aids to listeners with symmetric hearing loss. Comparing bilateral to unilateral (unaided) performance on tests such as the HINT may provide the clinician with objective data to support subjective preference for one hearing aid as opposed to two. American Academy of Audiology
ERIC Educational Resources Information Center
Ferguson, Sarah Hargus; Morgan, Shae D.
2018-01-01
Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics…
A physiological and behavioral system for hearing restoration with cochlear implants
King, Julia; Shehu, Ina; Roland, J. Thomas; Svirsky, Mario A.
2016-01-01
Cochlear implants are neuroprosthetic devices that provide hearing to deaf patients, although outcomes are highly variable even with prolonged training and use. The central auditory system must process cochlear implant signals, but it is unclear how neural circuits adapt—or fail to adapt—to such inputs. The knowledge of these mechanisms is required for development of next-generation neuroprosthetics that interface with existing neural circuits and enable synaptic plasticity to improve perceptual outcomes. Here, we describe a new system for cochlear implant insertion, stimulation, and behavioral training in rats. Animals were first ensured to have significant hearing loss via physiological and behavioral criteria. We developed a surgical approach for multichannel (2- or 8-channel) array insertion, comparable with implantation procedures and depth in humans. Peripheral and cortical responses to stimulation were used to program the implant objectively. Animals fitted with implants learned to use them for an auditory-dependent task that assesses frequency detection and recognition in a background of environmentally and self-generated noise and ceased responding appropriately to sounds when the implant was temporarily inactivated. This physiologically calibrated and behaviorally validated system provides a powerful opportunity to study the neural basis of neuroprosthetic device use and plasticity. PMID:27281743
Xia, Jing; Nooraei, Nazanin; Kalluri, Sridhar; Edwards, Brent
2015-04-01
This study investigated whether spatial separation between talkers helps reduce cognitive processing load, and how hearing impairment interacts with the cognitive load of individuals listening in multi-talker environments. A dual-task paradigm was used in which performance on a secondary task (visual tracking) served as a measure of the cognitive load imposed by a speech recognition task. Visual tracking performance was measured under four conditions in which the target and the interferers were distinguished by (1) gender and spatial location, (2) gender only, (3) spatial location only, and (4) neither gender nor spatial location. Results showed that when gender cues were available, a 15° spatial separation between talkers reduced the cognitive load of listening even though it did not provide further improvement in speech recognition (Experiment I). Compared to normal-hearing listeners, large individual variability in spatial release of cognitive load was observed among hearing-impaired listeners. Cognitive load was lower when talkers were spatially separated by 60° than when talkers were of different genders, even though speech recognition was comparable in these two conditions (Experiment II). These results suggest that a measure of cognitive load might provide valuable insight into the benefit of spatial cues in multi-talker environments.
Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker
2016-06-01
Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.
Centanni, Tracy M.; Chen, Fuyi; Booker, Anne M.; Engineer, Crystal T.; Sloan, Andrew M.; Rennaker, Robert L.; LoTurco, Joseph J.; Kilgard, Michael P.
2014-01-01
In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments. PMID:24871331
Money, M K; Pippin, G W; Weaver, K E; Kirsch, J P; Webster, D B
1995-07-01
Exogenous administration of GM1 ganglioside to CBA/J mice with a neonatal conductive hearing loss ameliorates the atrophy of spiral ganglion neurons, ventral cochlear nucleus neurons, and ventral cochlear nucleus volume. The present investigation demonstrates the extent of a conductive loss caused by atresia and tests the hypothesis that GM1 ganglioside treatment will ameliorate the conductive hearing loss. Auditory brainstem responses were recorded from four groups of seven mice each: two groups received daily subcutaneous injections of saline (one group had normal hearing; the other had a conductive hearing loss); the other two groups received daily subcutaneous injections of GM1 ganglioside (one group had normal hearing; the other had a conductive hearing loss). In mice with a conductive loss, decreases in hearing sensitivity were greatest at high frequencies. The decreases were determined by comparing mean ABR thresholds of the conductive loss mice with those of normal hearing mice. The conductive hearing loss induced in the mice in this study was similar to that seen in humans with congenital aural atresias. GM1 ganglioside treatment had no significant effect on ABR wave I thresholds or latencies in either group.
Normal-Hearing Listeners’ and Cochlear Implant Users’ Perception of Pitch Cues in Emotional Speech
Fuller, Christina; Gilbers, Dicky; Broersma, Mirjam; Goudbeek, Martijn; Free, Rolien; Başkent, Deniz
2015-01-01
In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study’s aim is to assess whether due to degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions, and, if so, how these differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. These recordings’ pitch cues were phonetically analyzed. The recordings were used to test 20 normal-hearing listeners’ and 20 CI users’ emotion recognition. In congruence with previous studies, high-arousal emotions had a higher mean pitch, wider pitch range, and more dominant pitches than low-arousal emotions. Regarding pitch, speakers did not differentiate emotions based on valence but on arousal. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI simulated stimuli. However, only normal-hearing listeners recognized one particular actor’s emotions worse than the other actors’. The groups behaved differently when presented with similar input, showing that they had to employ differing strategies. Considering the respective speaker’s deviating pronunciation, it appears that for normal-hearing listeners, mean pitch is a more salient cue than pitch range, whereas CI users are biased toward pitch range cues. PMID:27648210
Hearing in Noise Test Brazil: standardization for young adults with normal hearing.
Sbompato, Andressa Forlevise; Corteletti, Lilian Cassia Bornia Jacob; Moret, Adriane de Lima Mortari; Jacob, Regina Tangerino de Souza
2015-01-01
Individuals with the same speech recognition ability in quiet can have extremely different results in noisy environments. To standardize speech perception in adults with normal hearing in the free field using the Brazilian Hearing in Noise Test. Contemporary, cross-sectional cohort study. 79 adults with normal hearing and without cognitive impairment participated in the study. Lists of Hearing in Noise Test sentences were randomly presented in quiet, noise front, noise right, and noise left conditions. There were no significant differences between right and left ears at all frequencies tested (paired t test). Nor were significant differences observed when comparing gender and the interaction between these conditions. A difference was observed among the free field positions tested, except between the noise right and noise left conditions. Results of speech perception in adults with normal hearing in the free field during different listening situations in noise indicated poorer performance during the condition with noise and speech in front, i.e., 0°/0°. The values found in the standardization of the Hearing in Noise Test free field can be used as a reference in the development of protocols for tests of speech perception in noise, and for monitoring individuals with hearing impairment. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Verbal Working Memory in Older Adults: The Roles of Phonological Capacities and Processing Speed
ERIC Educational Resources Information Center
Nittrouer, Susan; Lowenstein, Joanna H.; Wucinich, Taylor; Moberly, Aaron C.
2016-01-01
Purpose: This study examined the potential roles of phonological sensitivity and processing speed in age-related declines of verbal working memory. Method: Twenty younger and 25 older adults with age-normal hearing participated. Two measures of verbal working memory were collected: digit span and serial recall of words. Processing speed was…
Regulation of microglial development: a novel role for thyroid hormone.
Lima, F R; Gervais, A; Colin, C; Izembart, M; Neto, V M; Mallat, M
2001-03-15
The postnatal development of rat microglia is marked by an important increase in the number of microglial cells and the growth of their ramified processes. We studied the role of thyroid hormone in microglial development. The distribution and morphology of microglial cells stained with isolectin B4 or monoclonal antibody ED1 were analyzed in cortical and subcortical forebrain regions of developing rats rendered hypothyroid by prenatal and postnatal treatment with methyl-thiouracil. Microglial processes were markedly less abundant in hypothyroid pups than in age-matched normal animals, from postnatal day 4 up to the end of the third postnatal week of life. A delay in process extension and a decrease in the density of microglial cell bodies, as shown by cell counts in the developing cingulate cortex of normal and hypothyroid animals, were responsible for these differences. Conversely, neonatal rat hyperthyroidism, induced by daily injections of 3,5,3'-triiodothyronine (T3), accelerated the extension of microglial processes and increased the density of cortical microglial cell bodies above physiological levels during the first postnatal week of life. Reverse transcription-PCR and immunological analyses indicated that cultured cortical ameboid microglial cells expressed the alpha1 and beta1 isoforms of nuclear thyroid hormone receptors. Consistent with the trophic and morphogenetic effects of thyroid hormone observed in situ, T3 favored the survival of cultured purified microglial cells and the growth of their processes. These results demonstrate that thyroid hormone promotes the growth and morphological differentiation of microglia during development.
Effects of education on aging-related cortical thinning among cognitively normal individuals.
Kim, Jun Pyo; Seo, Sang Won; Shin, Hee Young; Ye, Byoung Seok; Yang, Jin-Ju; Kim, Changsoo; Kang, Mira; Jeon, Seun; Kim, Hee Jin; Cho, Hanna; Kim, Jung-Hyun; Lee, Jong-Min; Kim, Sung Tae; Na, Duk L; Guallar, Eliseo
2015-09-01
We aimed to investigate the relationship between education and cortical thickness in cognitively normal individuals to determine whether education attenuated the association of advanced aging and cortical thinning. A total of 1,959 participants, in whom education levels were available, were included in the final analysis. Cortical thickness was measured on high-resolution MRIs using a surface-based method. Multiple linear regression analysis was performed for education level and cortical thickness, after controlling for possible confounders. High levels of education were correlated with increased mean cortical thickness throughout the entire cortex (p = 0.003). This association persisted after controlling for vascular risk factors. Statistical maps of cortical thickness showed that the high levels of education were correlated with increased cortical thickness in the bilateral premotor areas, anterior cingulate cortices, perisylvian areas, right superior parietal lobule, left lingual gyrus, and occipital pole. There were also interactive effects of age and education on the mean cortical thickness (p = 0.019). Our findings suggest the protective effect of education on cortical thinning in cognitively normal older individuals, regardless of vascular risk factors. This effect was found only in the older participants, suggesting that the protective effects of education on cortical thickness might be achieved by increased resistance to structural loss from aging rather than by simply providing a fixed advantage in the brain. © 2015 American Academy of Neurology.
ERIC Educational Resources Information Center
Most, Tova; Michaelis, Hilit
2012-01-01
Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…
NASA Technical Reports Server (NTRS)
Weinstein, Leonard M.
1994-01-01
Proposed hearing aid maps spectrum of speech into band of lower frequencies at which ear remains sensitive. By redirecting normal speech frequencies into frequency band from 100 to 1,500 Hz, hearing aid allows people to understand normal conversation, including telephone calls. Principle of operation of hearing aid could be adapted to other uses, such as clearing up noisy telephone or radio communication. In addition, loudspeakers would be more easily understood in presence of high background noise.
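The frequency-lowering idea in the NTRS brief above can be illustrated with a toy spectral remapping. This is only a sketch under assumed parameters (input band 0-8 kHz, output band 100-1,500 Hz, whole-signal FFT); the brief does not specify its actual mapping, and a real device would process short frames in real time.

```python
import numpy as np

def compress_spectrum(x, fs, lo=100.0, hi=1500.0, in_hi=8000.0):
    """Linearly remap spectral energy from 0..in_hi Hz into lo..hi Hz.

    Crude whole-signal illustration of frequency lowering; lo, hi and
    in_hi are assumed values, not parameters from the NTRS brief.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    Y = np.zeros_like(X)
    for i, f in enumerate(freqs):
        if f <= in_hi:
            target = lo + (hi - lo) * f / in_hi   # compressed frequency in Hz
            j = int(round(target * len(x) / fs))  # nearest output FFT bin
            if j < len(Y):
                Y[j] += X[i]
    return np.fft.irfft(Y, len(x))
```

With a 1 s signal at 16 kHz sampling, a 4 kHz tone lands at 100 + 1400 * 4000/8000 = 800 Hz, inside the band where a high-frequency-impaired ear may remain sensitive.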
Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha
Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim
2015-01-01
The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. 
Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641
Kant, Anjali R; Banik, Arun A
2017-09-01
The present study used the Lexical Neighborhood Test (LNT), a model-based test, to assess speech recognition performance in early- and late-implanted hearing-impaired children with normal and malformed cochleae. The LNT was administered to 46 children with congenital (prelingual) bilateral severe-to-profound sensorineural hearing loss using the Nucleus 24 cochlear implant. The children were grouped as follows: Group 1 (early implantees with normal cochlea, EI): n = 15, 3½-6½ years of age, mean age at implantation 3½ years; Group 2 (late implantees with normal cochlea, LI): n = 15, 6-12 years of age, mean age at implantation 5 years; Group 3 (early implantees with malformed cochlea, EIMC): n = 9, 4.9-10.6 years of age, mean age at implantation 3.10 years; Group 4 (late implantees with malformed cochlea, LIMC): n = 7, 7-12.6 years of age, mean age at implantation 6.3 years. The malformations were: dysplastic cochlea, common cavity, Mondini's malformation, incomplete partition types 1 and 2 (IP-1 and IP-2), and enlarged IAC. The children were instructed to repeat the words on hearing them, and means of the word and phoneme scores were computed. The LNT can also be used to assess speech recognition performance of hearing-impaired children with malformed cochleae. When both easy and hard lists of the LNT are considered, late implantees (with or without normal cochleae) achieved higher word scores than early implantees, but the differences are not statistically significant. Using the LNT to assess speech recognition enables a quantitative as well as descriptive report of the phonological processes used by the children.
Mackersie, Carol L.; Dewey, James; Guthrie, Lesli A.
2011-01-01
The purpose was to determine the effect of hearing loss on the ability to separate competing talkers using talker differences in fundamental frequency (F0) and apparent vocal-tract length (VTL). Performance of 13 adults with hearing loss and 6 adults with normal hearing was measured using the Coordinate Response Measure. For listeners with hearing loss, the speech was amplified and filtered according to the NAL-RP hearing aid prescription. Target-to-competition ratios varied from 0 to 9 dB. The target sentence was randomly assigned to the higher or lower values of F0 or VTL on each trial. Performance improved for F0 differences up to 9 and 6 semitones for people with normal hearing and hearing loss, respectively, but only when the target talker had the higher F0. Recognition for the lower F0 target improved when trial-to-trial uncertainty was removed (9-semitone condition). Scores improved with increasing differences in VTL for the normal-hearing group. On average, hearing-impaired listeners did not benefit from VTL cues, but substantial inter-subject variability was observed. The amount of benefit from VTL cues was related to the average hearing loss in the 1–3-kHz region when the target talker had the shorter VTL. PMID:21877813
Horn, David L; Pisoni, David B; Miyamoto, Richard T
2006-08-01
The objective of this study was to assess relations between fine and gross motor development and spoken language processing skills in pediatric cochlear implant users. The authors conducted a retrospective analysis of longitudinal data. Prelingually deaf children who received a cochlear implant before age 5 and had no known developmental delay or cognitive impairment were included in the study. Fine and gross motor development were assessed before implantation using the Vineland Adaptive Behavioral Scales, a standardized parental report of adaptive behavior. Fine and gross motor scores reflected a given child's motor functioning with respect to a normative sample of typically developing, normal-hearing children. Relations between these preimplant scores and postimplant spoken language outcomes were assessed. In general, gross motor scores were found to be positively related to chronologic age, whereas the opposite trend was observed for fine motor scores. Fine motor scores were more strongly correlated with postimplant expressive and receptive language scores than gross motor scores. Our findings suggest a dissociation between fine and gross motor development in prelingually deaf children: fine motor skills, in contrast to gross motor skills, tend to be delayed as the prelingually deaf children get older. These findings provide new knowledge about the links between motor and spoken language development and suggest that auditory deprivation may lead to atypical development of certain motor and language skills that share common cortical processing resources.
Inquiring Ears Want to Know: A Fact Sheet about Your Hearing Test
(Excerpt, truncated in this record) Hearing tests track changes in hearing over time. Your hearing threshold levels (the quietest sounds you can hear) are recorded. Do I have normal hearing? Compare your hearing threshold levels to this scale: -10 to 25 dB is normal hearing; the remainder of the scale (26 dB and above) is truncated.
Rekik, Islem; Li, Gang; Lin, Weili; Shen, Dinggang
2016-02-01
Longitudinal neuroimaging analysis methods have remarkably advanced our understanding of early postnatal brain development. However, learning predictive models to trace forth the evolution trajectories of both normal and abnormal cortical shapes remains broadly absent. To fill this critical gap, we pioneered the first prediction model for longitudinal developing cortical surfaces in infants using a spatiotemporal current-based learning framework solely from the baseline cortical surface. In this paper, we detail this prediction model and even further improve its performance by introducing two key variants. First, we use the varifold metric to overcome the limitations of the current metric for surface registration that was used in our preliminary study. We also extend the conventional varifold-based surface registration model for pairwise registration to a spatiotemporal surface regression model. Second, we propose a morphing process of the baseline surface using its topographic attributes such as normal direction and principal curvature sign. Specifically, our method learns from longitudinal data both the geometric (vertices positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we use the proposed varifold-based shape regression model to estimate geodesic cortical shape evolution trajectories for each training subject. We then build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we select the best learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints, based on similarity metrics between this baseline surface and the learnt baseline population average surface atlas. We used a leave-one-out cross validation method to predict the inner cortical surface shape at 3, 6, 9 and 12 months of age from the baseline cortical surface shape at birth. 
Our method attained a higher prediction accuracy and better captured the spatiotemporal dynamic change of the highly folded cortical surface than the previously proposed prediction method. Copyright © 2015 Elsevier B.V. All rights reserved.
Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.
2012-01-01
Background Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one’s speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss. Purpose The purpose of this project was to determine if normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages. Research Design Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus presented in speech spectrum noise were compared to their objective performance on the same task. Study Sample Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants. Data Collection and Analysis On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners’ threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. 
In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared to findings on the Objective-Subjective listening task. Results Nearly half of the children with normal hearing significantly overestimated their listening in noise ability on the Objective-Subjective listening task, compared to less than 9% of the adults. There was a significant correlation between age and results on the Objective-Subjective task, indicating that the younger children in the sample (age 7–12 yr) tended to overestimate their listening ability more than the adolescents and adults. Among the children with cochlear implants, eight of the 10 participants significantly overestimated their listening ability (as compared to 13 of the 24 normal-hearing children in the same age range). We did not find a significant relationship between results on the Objective-Subjective listening task and performance on the given measures of academic achievement or intelligence. Conclusions Findings from this study suggest that many children with normal hearing and children with cochlear implants often fail to recognize when they encounter conditions in which their listening ability is compromised. These results may have practical implications for classroom learning, particularly for children with hearing loss in mainstream settings. PMID:22436118
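The threshold estimation described in the Research Design above (a psychometric function relating proportion correct, or proportion perceived as intelligible, to target/masker ratio, inverted at 51%) can be sketched as follows. The logistic form and the coarse grid-search fit are illustrative assumptions; the study's actual fitting procedure is not given in the abstract.

```python
import numpy as np

def logistic(t, midpoint, slope):
    """Two-parameter logistic psychometric function of T/M ratio t (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (t - midpoint)))

def fit_threshold(tm_ratios, proportions, criterion=0.51):
    """Fit a logistic by grid search, return T/M at the criterion level.

    Illustrative sketch: chance level, lapses, and the study's real
    fitting method are ignored here.
    """
    tm = np.asarray(tm_ratios, dtype=float)
    p = np.asarray(proportions, dtype=float)
    best, best_err = (tm.min(), 1.0), np.inf
    for m in np.linspace(tm.min(), tm.max(), 200):
        for s in np.linspace(0.1, 5.0, 100):
            err = np.sum((logistic(tm, m, s) - p) ** 2)
            if err < best_err:
                best, best_err = (m, s), err
    m, s = best
    # Invert the fitted logistic at the criterion proportion.
    return m + np.log(criterion / (1.0 - criterion)) / s
```

A discrepancy score in the study's sense would then be the difference between the thresholds fitted to the Subjective and Objective conditions for the same listener.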
Sudden onset unilateral sensorineural hearing loss after rabies vaccination.
Okhovat, Saleh; Fox, Richard; Magill, Jennifer; Narula, Antony
2015-12-15
A 33-year-old man developed profound sudden onset right-sided hearing loss with tinnitus and vertigo within 24 h of pretravel rabies vaccination. There was no history of upper respiratory tract infection, systemic illness, ototoxic medication or trauma, and otoscopic examination was normal. Pure tone audiograms (PTA) demonstrated right-sided sensorineural hearing loss (thresholds 90-100 dB) and normal left-sided hearing. MRI of the internal acoustic meatus, viral serology (hepatitis B, C, HIV and cytomegalovirus) and syphilis screen were normal. Positive Epstein-Barr virus IgG, viral capsid IgG and anticochlear antibodies (anti-HSP-70) were noted. Initial treatment involved a course of high-dose oral prednisolone and acyclovir. Repeat PTAs after 12 days of treatment showed a small improvement in hearing thresholds. Salvage intratympanic steroid injections were attempted but failed to improve hearing further. Sudden onset sensorineural hearing loss (SSNHL) is an uncommon but frightening experience for patients. This is the first report of SSNHL following rabies immunisation in an adult. 2015 BMJ Publishing Group Ltd.
Regulation of cerebral cortical neurogenesis by the Pax6 transcription factor
Manuel, Martine N.; Mi, Da; Mason, John O.; Price, David J.
2015-01-01
Understanding brain development remains a major challenge at the heart of understanding what makes us human. The neocortex, in evolutionary terms the newest part of the cerebral cortex, is the seat of higher cognitive functions. Its normal development requires the production, positioning, and appropriate interconnection of very large numbers of both excitatory and inhibitory neurons. Pax6 is one of a relatively small group of transcription factors that exert high-level control of cortical development, and whose mutation or deletion from developing embryos causes major brain defects and a wide range of neurodevelopmental disorders. Pax6 is very highly conserved between primate and non-primate species, is expressed in a gradient throughout the developing cortex and is essential for normal corticogenesis. Our understanding of Pax6’s functions and the cellular processes that it regulates during mammalian cortical development has significantly advanced in the last decade, owing to the combined application of genetic and biochemical analyses. Here, we review the functional importance of Pax6 in regulating cortical progenitor proliferation, neurogenesis, and formation of cortical layers and highlight important differences between rodents and primates. We also review the pathological effects of PAX6 mutations in human neurodevelopmental disorders. We discuss some aspects of Pax6’s molecular actions including its own complex transcriptional regulation, the distinct molecular functions of its splice variants and some of Pax6’s known direct targets which mediate its actions during cortical development. PMID:25805971
D’Aquila, Laura A.; Desloge, Joseph G.; Braida, Louis D.
2017-01-01
The masking release (MR; i.e., better speech recognition in fluctuating compared with continuous noise backgrounds) that is evident for listeners with normal hearing (NH) is generally reduced or absent for listeners with sensorineural hearing impairment (HI). In this study, a real-time signal-processing technique was developed to improve MR in listeners with HI and offer insight into the mechanisms influencing the size of MR. This technique compares short-term and long-term estimates of energy, increases the level of short-term segments whose energy is below the average energy, and normalizes the overall energy of the processed signal to be equivalent to that of the original long-term estimate. This signal-processing algorithm was used to create two types of energy-equalized (EEQ) signals: EEQ1, which operated on the wideband speech plus noise signal, and EEQ4, which operated independently on each of four bands with equal logarithmic width. Consonant identification was tested in backgrounds of continuous and various types of fluctuating speech-shaped Gaussian noise including those with both regularly and irregularly spaced temporal fluctuations. Listeners with HI achieved similar scores for EEQ and the original (unprocessed) stimuli in continuous-noise backgrounds, while superior performance was obtained for the EEQ signals in fluctuating background noises that had regular temporal gaps but not for those with irregularly spaced fluctuations. Thus, in noise backgrounds with regularly spaced temporal fluctuations, the energy-normalized signals led to larger values of MR and higher intelligibility than obtained with unprocessed signals. PMID:28602128
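The EEQ processing steps described above (compare short-term with long-term energy estimates, raise below-average segments, renormalize overall energy) can be sketched roughly as follows. The frame length and gain rule are illustrative assumptions, and this offline version ignores the real-time constraints of the original signal-processing implementation.

```python
import numpy as np

def energy_equalize(signal, frame_len=160):
    """Boost below-average short-term segments, then renormalize.

    Offline sketch of the energy-equalization (EEQ) idea: frame_len
    (10 ms at 16 kHz) and the boost-to-average rule are assumptions,
    not the authors' parameters.
    """
    x = np.asarray(signal, dtype=float)
    n_frames = len(x) // frame_len
    y = x[: n_frames * frame_len].reshape(n_frames, frame_len).copy()

    long_term_energy = np.mean(y ** 2)        # long-term estimate
    frame_energy = np.mean(y ** 2, axis=1)    # short-term estimates

    # Raise quiet frames up to the long-term average energy.
    quiet = frame_energy < long_term_energy
    gains = np.ones(n_frames)
    gains[quiet] = np.sqrt(
        long_term_energy / np.maximum(frame_energy[quiet], 1e-12))
    y *= gains[:, None]

    # Normalize total energy back to the original long-term estimate.
    y *= np.sqrt(long_term_energy / np.mean(y ** 2))
    return y.ravel()
```

Filling the temporal gaps of a fluctuating masker with amplified target speech in this way is one plausible reading of why the processed signals increased masking release only for noises with regularly spaced gaps.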
Kenet, T.; Froemke, R. C.; Schreiner, C. E.; Pessah, I. N.; Merzenich, M. M.
2007-01-01
Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in human environment and tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossly abnormal or reversed in about half of the exposed pups, the balance of neuronal inhibition to excitation for A1 neurons was disturbed, and the critical period plasticity that underlies normal postnatal auditory system development was significantly altered. These findings demonstrate that developmental exposure to this class of environmental contaminant alters cortical development. It is proposed that exposure to noncoplanar PCBs may contribute to common developmental disorders, especially in populations with heritable imbalances in neurotransmitter systems that regulate the ratio of inhibition and excitation in the brain. We conclude that the health implications associated with exposure to noncoplanar PCBs in human populations merit a more careful examination. PMID:17460041
Audiometric Predictions Using SFOAE and Middle-Ear Measurements
Ellison, John C.; Keefe, Douglas H.
2006-01-01
Objective The goals of the study are to determine how well stimulus-frequency otoacoustic emissions (SFOAEs) identify hearing loss, classify hearing loss as mild or moderate-severe, and correlate with pure-tone thresholds in a population of adults with normal middle-ear function. Other goals are to determine if middle-ear function as assessed by wideband acoustic transfer function (ATF) measurements in the ear canal account for the variability in normal thresholds, and if the inclusion of ATFs improves the ability of SFOAEs to identify hearing loss and predict pure-tone thresholds. Design The total suppressed SFOAE signal and its corresponding noise were recorded in 85 ears (22 normal ears and 63 ears with sensorineural hearing loss) at octave frequencies from 0.5 – 8 kHz using a nonlinear residual method. SFOAEs were recorded a second time in three impaired ears to assess repeatability. Ambient-pressure ATFs were obtained in all but one of these 85 ears, and were also obtained from an additional 31 normal-hearing subjects in whom SFOAE data were not obtained. Pure-tone air-and bone-conduction thresholds and 226-Hz tympanograms were obtained on all subjects. Normal tympanometry and the absence of air-bone gaps were used to screen subjects for normal middle-ear function. Clinical decision theory was used to assess the performance of SFOAE and ATF predictors in classifying ears as normal or impaired, and linear regression analysis was used to test the ability of SFOAE and ATF variables to predict the air-conduction audiogram. Results The ability of SFOAEs to classify ears as normal or hearing impaired was significant at all test frequencies. The ability of SFOAEs to classify impaired ears as either mild or moderate-severe was significant at test frequencies from 0.5 to 4 kHz. SFOAEs were present in cases of severe hearing loss. SFOAEs were also significantly correlated with air-conduction thresholds from 0.5 to 8 kHz. 
The best performance occurred using the SFOAE signal-to-noise ratio (S/N) as the predictor, and the overall best performance was at 2 kHz. The SFOAE S/N measures were repeatable to within 3.5 dB in impaired ears. The ATF measures explained up to 25% of the variance in the normal audiogram; however, ATF measures did not improve SFOAE predictions of hearing loss except at 4 kHz. Conclusions In common with other OAE types, SFOAEs are capable of identifying the presence of hearing loss. In particular, SFOAEs performed better than distortion-product and click-evoked OAEs in predicting auditory status at 0.5 kHz; SFOAE performance was similar to that of other OAE types at higher frequencies except for a slight performance reduction at 4 kHz. Because SFOAEs were detected in ears with mild to severe hearing loss, they may also provide an estimate of the classification of hearing loss. Although SFOAEs were significantly correlated with hearing threshold, they do not appear to have clinical utility in predicting a specific behavioral threshold. Information on middle-ear status as assessed by ATF measures offered minimal improvement in SFOAE predictions of auditory status in a population of normal and impaired ears with normal middle-ear function. However, ATF variables did explain a significant fraction of the variability in the audiograms of normal ears, suggesting that audiometric thresholds in normal ears are partially constrained by middle-ear function as assessed by ATF tests. PMID:16230898
Different categories of living and non-living sound-sources activate distinct cortical networks
Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.
2009-01-01
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. 
Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134
Mackersie, Carol L.; MacPhee, Imola X.; Heldt, Emily W.
2014-01-01
SHORT SUMMARY (précis) Sentence recognition by participants with and without hearing loss was measured in quiet and in babble noise while monitoring two autonomic nervous system measures: heart-rate variability and skin conductance. Heart-rate variability decreased under difficult listening conditions for participants with hearing loss, but not for participants with normal hearing. Skin conductance noise reactivity was greater for those with hearing loss than for those with normal hearing, but did not vary with the signal-to-noise ratio. Subjective ratings of workload/stress obtained after each listening condition were similar for the two participant groups. PMID:25170782
Das, Barshapriya; Chatterjee, Indranil; Kumar, Suman
2013-01-01
Lack of proper auditory feedback in hearing-impaired subjects results in functional voice disorder. It is directly related to discoordination of intrinsic and extrinsic laryngeal muscles and disturbed contraction and relaxation of antagonistic muscles. A total of twenty children in the age range of 5-10 years were considered for the study. They were divided into two groups: normal hearing children and hearing aid user children. Results showed a significant difference between the normal-hearing and hearing aid user children in vital capacity, maximum sustained phonation (MSP), and fast adduction-abduction rate, but no significant difference in peak flow. A reduced vital capacity in hearing aid user children suggests a limited use of the lung volume for speech production. It may be inferred from the study that hearing aid user children have poor vocal proficiency, which is reflected in their voice. Atypical use of voicing in hearing-impaired subjects appears to result from improper auditory feedback.
Evidence of hearing loss in a “normally-hearing” college-student population
Le Prell, C. G.; Hensley, B.N.; Campbell, K. C. M.; Hall, J. W.; Guire, K.
2011-01-01
We report pure-tone hearing threshold findings in 56 college students. All subjects reported normal hearing during telephone interviews, yet not all subjects had normal sensitivity as defined by well-accepted criteria. At one or more test frequencies (0.25–8 kHz), 7% of ears had thresholds ≥25 dB HL and 12% had thresholds ≥20 dB HL. The proportion of ears with abnormal findings decreased when three-frequency pure-tone-averages were used. Low-frequency PTA hearing loss was detected in 2.7% of ears and high-frequency PTA hearing loss was detected in 7.1% of ears; however, there was little evidence for “notched” audiograms. There was a statistically reliable relationship in which personal music player use was correlated with decreased hearing status in male subjects. Routine screening and education regarding hearing loss risk factors are critical as college students do not always self-identify early changes in hearing. Large-scale systematic investigations of college students’ hearing status appear to be warranted; the current sample size was not adequate to precisely measure potential contributions of different sound sources to the elevated thresholds measured in some subjects. PMID:21288064
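The single-frequency (≥25 dB HL) and three-frequency pure-tone-average (PTA) criteria used above can be sketched as a simple screening function. The frequency triplets below are common clinical conventions, assumed here for illustration; the abstract does not list the study's exact averaging frequencies.

```python
def pta(thresholds_db_hl, freqs_khz):
    """Three-frequency pure-tone average in dB HL."""
    return sum(thresholds_db_hl[f] for f in freqs_khz) / len(freqs_khz)

def screen_ear(thresholds_db_hl, cutoff_db=25.0):
    """Flag an ear whose low- or high-frequency PTA meets the cutoff.

    Assumed conventions: low-frequency PTA over 0.5/1/2 kHz,
    high-frequency PTA over 2/4/8 kHz.
    """
    low = pta(thresholds_db_hl, (0.5, 1, 2))
    high = pta(thresholds_db_hl, (2, 4, 8))
    return {"low_pta": low, "high_pta": high,
            "low_loss": low >= cutoff_db, "high_loss": high >= cutoff_db}
```

As the abstract notes, averaging smooths out isolated elevated thresholds, which is why fewer ears are flagged by PTA criteria than by single-frequency criteria.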
Reading vocabulary in children with and without hearing loss: the roles of task and word type.
Coppens, Karien M; Tellings, Agnes; Verhoeven, Ludo; Schreuder, Robert
2013-04-01
To address the problem of low reading comprehension scores among children with hearing impairment, it is necessary to have a better understanding of their reading vocabulary. In this study, the authors investigated whether task and word type differentiate the reading vocabulary knowledge of children with and without severe hearing loss. Seventy-two children with hearing loss and 72 children with normal hearing performed a lexical and a use decision task. Both tasks contained the same 180 words divided over 7 clusters, each cluster containing words with a similar pattern of scores on 8 word properties (word class, frequency, morphological family size, length, age of acquisition, mode of acquisition, imageability, and familiarity). Whereas the children with normal hearing scored better on the 2 tasks than the children with hearing loss, the size of the difference varied depending on the type of task and word. Performance differences between the 2 groups increased as words and tasks became more complex. Despite delays, children with hearing loss showed a similar pattern of vocabulary acquisition as their peers with normal hearing. For the most precise assessment of reading vocabulary possible, a range of tasks and word types should be used.
Delayed auditory pathway maturation and prematurity.
Koenighofer, Martin; Parzefall, Thomas; Ramsebner, Reinhard; Lucas, Trevor; Frei, Klemens
2015-06-01
Hearing loss is the most common sensory disorder in developed countries and leads to a severe reduction in quality of life. In this uncontrolled case series, we evaluated auditory development in patients suffering from congenital nonsyndromic hearing impairment related to preterm birth. Six patients delivered preterm (25th-35th gestational weeks), suffering from mild to profound congenital nonsyndromic hearing impairment and born to healthy, nonconsanguineous parents, were evaluated by otoacoustic emissions, tympanometry, brainstem-evoked response audiometry, and genetic testing. All patients were treated with hearing aids, and one patient required cochlear implantation. One preterm infant (32nd gestational week) initially presented with a 70 dB hearing loss, accompanied by negative otoacoustic emissions and normal tympanometric findings. The patient was treated with hearing aids and displayed a gradual improvement in bilateral hearing that completely normalized by 14 months of age, accompanied by the development of otoacoustic emission responses. Conclusions We present here for the first time a fully documented preterm patient with delayed auditory pathway maturation and normalization of hearing within 14 months of birth. Although rare, postpartum development of the auditory system should therefore be considered in the initial stages of treating preterm hearing-impaired patients.
Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients
ERIC Educational Resources Information Center
Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.; Chatterjee, Monita
2017-01-01
Purpose: The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method: Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally…
Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G
2005-12-01
This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. 
Results also revealed a small increase in performance for both groups in the extended bandwidth condition but the difference was not significant at the traditional p = 0.05 level. The ability to learn words rapidly appears to be poorer in children with hearing loss over a wide range of ages. These results coincide with the consistently poorer receptive vocabularies for these children. Neither the word-learning or receptive-vocabulary measures were related to the amplification histories of these children. Finally, providing an extended high-frequency bandwidth did not significantly improve rapid word-learning for either group with these stimuli.
Ricketts, Todd A; Picou, Erin M
2013-09-01
This study aimed to evaluate the potential utility of asymmetrical and symmetrical directional hearing aid fittings for school-age children in simulated classroom environments, and to evaluate the speech recognition performance of children with normal hearing in the same listening environments. Two groups of school-age children 11 to 17 years of age participated in this study. Twenty participants had normal hearing, and 29 participants had sensorineural hearing loss. Participants with hearing loss were fitted with behind-the-ear hearing aids with clinically appropriate venting and were tested in 3 hearing aid configurations: bilateral omnidirectional, bilateral directional, and asymmetrical directional microphones. Speech recognition testing was completed in each microphone configuration in 3 environments: Talker-Front, Talker-Back, and Question-Answer situations. During testing, the location of the speech signal changed, but participants were always seated in a noisy, moderately reverberant classroom-like room. For all conditions, results revealed expected effects of directional microphones on speech recognition performance. When the signal of interest was in front of the listener, the bilateral directional microphone was best, and when the signal of interest was behind the listener, the bilateral omnidirectional microphone was best. Performance with asymmetric directional microphones was between the 2 symmetrical conditions. The magnitudes of directional benefits and decrements were not significantly correlated. Children with hearing loss performed similarly to their peers with normal hearing when fitted with directional microphones and the speech was from the front. In contrast, children with normal hearing still outperformed children with hearing loss if the speech originated from behind, even when the children were fitted with the optimal hearing aid microphone mode for the situation.
Bilateral directional microphones can be effective in improving speech recognition performance for children in the classroom, as long as the child is facing the talker of interest. Bilateral directional microphones, however, can impair performance if the signal originates from behind the listener. These data also suggest that the magnitude of decrement is not predictable from an individual's benefit. The results re-emphasize the importance of appropriate switching between microphone modes so children can take full advantage of directional benefits without being hurt by directional decrements. An asymmetric fitting limits decrements but does not lead to maximum speech recognition scores when compared with the optimal symmetrical fitting. Therefore, the asymmetric mode may not be the best option as a default fitting for children in a classroom environment. While directional microphones improve performance for children with hearing loss, their performance in most conditions continues to be impaired relative to their normal-hearing peers, particularly when the signals of interest originate from behind or from an unpredictable location.
Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul
2011-07-01
The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years old. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT) with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition (P<0.001). However, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varied across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that monaural ear-level FM receivers might provide children with normal hearing a benefit similar to binaural use. Individual variation in binaural over monaural FM benefit suggests that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing children with normal hearing. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to see whether similar findings are obtained.
Rasetshwane, Daniel M.; Trevino, Andrea C.; Gombert, Jessa N.; Liebig-Trehearn, Lauren; Kopun, Judy G.; Jesteadt, Walt; Neely, Stephen T.; Gorga, Michael P.
2015-01-01
This study describes procedures for constructing equal-loudness contours (ELCs) in units of phons from categorical loudness scaling (CLS) data and characterizes the impact of hearing loss on these estimates of loudness. Additionally, this study developed a metric, level-dependent loudness loss, which uses CLS data to specify the deviation from normal loudness perception at various loudness levels and as a function of frequency for an individual listener with hearing loss. CLS measurements were made in 87 participants with hearing loss and 61 participants with normal hearing. An assessment of the reliability of CLS measurements was conducted on a subset of the data. CLS measurements were reliable. There was a systematic increase in the slope of the low-level segment of the CLS functions with increasing degree of hearing loss. ELCs derived from CLS measurements were similar to standardized ELCs (International Organization for Standardization, ISO 226:2003). The presence of hearing loss decreased the vertical spacing of the ELCs, reflecting loudness recruitment and reduced cochlear compression. Representing CLS data in phons may lead to wider acceptance of CLS measurements. Like the audiogram that specifies hearing loss at threshold, level-dependent loudness loss describes deficit for suprathreshold sounds. Such information may have implications for the fitting of hearing aids. PMID:25920842
Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew
In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults of age between 20 and 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences of the settings were also obtained by a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly better for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.
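The personalized settings above were derived from "modified one-third gain targets." As a point of reference, the plain one-third gain rule prescribes insertion gain at each audiometric frequency equal to one third of the hearing level at that frequency; the study's modification is not specified in the abstract, so this sketch applies only the unmodified rule, with negative gains clamped to zero.

```python
def one_third_gain(hearing_levels_db_hl):
    """Plain one-third gain rule (illustrative, not the study's
    modified version): insertion gain at each frequency is one third
    of the hearing level in dB HL, clamped at zero for negative values.

    `hearing_levels_db_hl` maps frequency (kHz) -> threshold (dB HL).
    """
    return {f: max(hl / 3.0, 0.0) for f, hl in hearing_levels_db_hl.items()}
```

For example, a 45 dB HL threshold at 1 kHz would prescribe 15 dB of gain at that frequency under this rule.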
Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E; Moore, Brian C J
2018-01-01
Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the "clean" speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
2017-09-01
The present study investigates behavioral and electrophysiological auditory and cognitive-related plasticity in three groups of healthy older adults (60-77 years). Group 1 comprised moderately hearing-impaired, experienced hearing aid users fitted with new hearing aids using non-linear frequency compression (NLFC on); Group 2, also moderately hearing-impaired, used the same type of hearing aids, but NLFC was switched off for the entire study duration (NLFC off); Group 3 comprised control individuals with age-appropriate hearing (NHO), who did not differ in IQ, gender, or age from Groups 1 and 2. At five measurement time points (M1-M5) across three months, a series of active oddball tasks were administered while EEG was recorded. The stimuli comprised syllables consisting of naturally high-pitched fricatives (/sh/, /s/, and /f/), which are hard to distinguish for individuals with presbycusis. By applying a data-driven microstate approach to obtain global field power (GFP) as a measure of processing effort, the modulations of perceptual (P50, N1, P2) and cognitive-related (N2b, P3b) auditory evoked potentials were calculated and subsequently related to behavioral changes (accuracy and reaction time) across time. All groups improved their performance across time, but NHO showed consistently higher accuracy and faster reaction times than the hearing-impaired groups, especially under difficult conditions. Electrophysiological results complemented this finding by demonstrating longer latencies in the P50 and the N1 peak in hearing aid users. Furthermore, the GFP of cognitive-related evoked potentials decreased from M1 to M2 in the NHO group, while a comparable decrease in the hearing-impaired groups was only evident at M5. After twelve weeks of hearing aid use for eight hours each day, we found a significantly lower GFP in the P3b of the group with NLFC on as compared to the group with NLFC off.
These findings suggest higher processing effort, as evidenced by higher GFP, in hearing-impaired individuals when compared to those with normal hearing, although the hearing-impaired show a decrease of processing effort after repeated stimulus exposure. In addition, our findings indicate that the acclimatization to a new hearing aid algorithm may take several weeks.
Ma, Xiaoran; McPherson, Bradley; Ma, Lian
2016-03-01
Objective Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate.
Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an auditory diagnosis is made for this population.
ERIC Educational Resources Information Center
Iliadou, Vasiliki; Bamiou, Doris Eva
2012-01-01
Purpose: To investigate the clinical utility of the Children's Auditory Processing Performance Scale (CHAPPS; Smoski, Brunt, & Tannahill, 1992) to evaluate listening ability in 12-year-old children referred for auditory processing assessment. Method: This was a prospective case control study of 97 children (age range = 11;4 [years;months] to…
dos Santos Filha, Valdete Alves Valentins; Samelli, Alessandra Giannella; Matas, Carla Gentile
2015-09-11
Tinnitus is an important occupational health concern, but few studies have focused on the central auditory pathways of workers with a history of occupational noise exposure. Thus, we analyzed the central auditory pathways of workers with a history of occupational noise exposure who had normal hearing thresholds, and compared middle latency auditory evoked potentials in those with and without noise-induced tinnitus. Sixty individuals (30 with and 30 without tinnitus) underwent the following procedures: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies between 0.25 and 8 kHz, and middle latency auditory evoked potentials. Quantitative analysis of latencies and amplitudes of the middle latency auditory evoked potential showed no significant differences between the groups with and without tinnitus. In the qualitative analysis, we found that both groups showed increased middle latency auditory evoked potential latencies. The study group had more alterations of the "both" type regarding the Na-Pa amplitude, while the control group had more "electrode effect" alterations, although these group differences were not statistically significant. Individuals with normal hearing with or without tinnitus who are exposed to occupational noise have altered middle latency auditory evoked potentials, suggesting impairment of the auditory pathways in cortical and subcortical regions. Although differences did not reach significance, individuals with tinnitus seemed to have more abnormalities in components of the middle latency auditory evoked potential when compared to individuals without tinnitus, suggesting alterations in the generation and transmission of neuroelectrical impulses along the auditory pathway.
Bernstein, Lynne E.; Lu, Zhong-Lin; Jiang, Jintao
2008-01-01
A fundamental question about human perception is how the speech perceiving brain combines auditory and visual phonetic stimulus information. We assumed that perceivers learn the normal relationship between acoustic and optical signals. We hypothesized that when the normal relationship is perturbed by mismatching the acoustic and optical signals, cortical areas responsible for audiovisual stimulus integration respond as a function of the magnitude of the mismatch. To test this hypothesis, in a previous study, we developed quantitative measures of acoustic-optical speech stimulus incongruity that correlate with perceptual measures. In the current study, we presented low incongruity (LI, matched), medium incongruity (MI, moderately mismatched), and high incongruity (HI, highly mismatched) audiovisual nonsense syllable stimuli during fMRI scanning. Perceptual responses differed as a function of the incongruity level, and BOLD measures were found to vary regionally and quantitatively with perceptual and quantitative incongruity levels. Each increase in level of incongruity resulted in an increase in overall levels of cortical activity and in additional activations. However, the only cortical region that demonstrated differential sensitivity to the three stimulus incongruity levels (HI > MI > LI) was a subarea of the left supramarginal gyrus (SMG). The left SMG might support a fine-grained analysis of the relationship between audiovisual phonetic input in comparison with stored knowledge, as hypothesized here. The methods here show that quantitative manipulation of stimulus incongruity is a new and powerful tool for disclosing the system that processes audiovisual speech stimuli. PMID:18495091
Effects of noise and working memory capacity on memory processing of speech for hearing-aid users.
Ng, Elaine Hoi Ning; Rudner, Mary; Lunner, Thomas; Pedersen, Michael Syskind; Rönnberg, Jerker
2013-07-01
It has been shown that noise reduction algorithms can reduce the negative effects of noise on memory processing in persons with normal hearing. The objective of the present study was to investigate whether a similar effect can be obtained for persons with hearing impairment and whether such an effect is dependent on individual differences in working memory capacity. A sentence-final word identification and recall (SWIR) test was conducted in two noise backgrounds with and without noise reduction, as well as in quiet. Working memory capacity was measured using a reading span (RS) test. Participants were 26 experienced hearing-aid users with moderate to moderately severe sensorineural hearing loss. Noise impaired recall performance. Competing speech disrupted memory performance more than speech-shaped noise. For late list items, the disruptive effect of the competing speech background was virtually cancelled out by noise reduction for persons with high working memory capacity. Noise reduction can reduce the adverse effect of noise on memory for speech for persons with good working memory capacity. We argue that the mechanism behind this is faster word identification that enhances encoding into working memory.
Upward spread of informational masking in normal-hearing and hearing-impaired listeners
NASA Astrophysics Data System (ADS)
Alexander, Joshua M.; Lutfi, Robert A.
2003-04-01
Thresholds for pure-tone signals of 0.8, 2.0, and 5.0 kHz were measured in the presence of a simultaneous multitone masker in 15 normal-hearing and 8 hearing-impaired listeners. The masker consisted of fixed-frequency tones ranging from 522 to 8346 Hz at 1/3-octave intervals, excluding the 2/3-octave interval on either side of the signal. Masker uncertainty was manipulated by independently and randomly playing individual masker tones with probability p=0.5 or p=1.0 on each trial. Informational masking (IM) was estimated by the threshold difference (p=0.5 minus p=1.0). Decision weights were estimated from correlations of the listener's response with the occurrence of the signal and individual masker components on each trial. IM was greater for normal-hearing listeners than for hearing-impaired listeners, and most listeners had at least 10 dB of IM for one of the signal frequencies. For both groups, IM increased as the number of masker components below the signal frequency increased. Decision weights were also similar for both groups: masker frequencies below the signal were weighted more than those above. Implications are that normal-hearing and hearing-impaired individuals do not weight information differently in these masking conditions and that factors associated with listening may be partially responsible for the greater effectiveness of low-frequency maskers. [Work supported by NIDCD.]
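The two measures described in this abstract can be sketched numerically. This is an illustrative reconstruction, not the authors' code: the function names are assumptions, IM is taken literally as the threshold difference between the uncertain (p=0.5) and fixed (p=1.0) masker conditions, and the decision weights are approximated as point-biserial correlations between trial-by-trial responses and component occurrence.

```python
import numpy as np

def informational_masking(threshold_p05, threshold_p10):
    """IM in dB: threshold with masker uncertainty (p=0.5) minus
    threshold with the fixed masker (p=1.0)."""
    return threshold_p05 - threshold_p10

def decision_weights(responses, component_present):
    """Correlate binary responses (n_trials,) with a trials-by-components
    0/1 occurrence matrix; returns one weight per component."""
    r = np.asarray(responses, dtype=float)
    X = np.asarray(component_present, dtype=float)
    r_c = r - r.mean()                      # center responses
    X_c = X - X.mean(axis=0)                # center each component column
    denom = np.sqrt((r_c**2).sum() * (X_c**2).sum(axis=0))
    return (X_c * r_c[:, None]).sum(axis=0) / denom

# Example: 10 dB of IM, the magnitude reported for most listeners.
print(informational_masking(55.0, 45.0))  # 10.0
```

A component whose occurrence perfectly predicts the listener's response receives a weight of 1.0; components the listener ignores receive weights near zero.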
Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian
2016-01-01
Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791
Plasticity of spatial hearing: behavioural effects of cortical inactivation
Nodal, Fernando R; Bajo, Victoria M; King, Andrew J
2012-01-01
The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635
Perception of Musical Emotion in the Students with Cognitive and Acquired Hearing Loss.
Mazaheryazdi, Malihe; Aghasoleimani, Mina; Karimi, Maryam; Arjmand, Pirooz
2018-01-01
Hearing loss can affect the perception of emotion in music. The present study investigated whether students with congenital hearing loss exposed to Deaf culture perceive the same emotions from music as students with acquired hearing loss. Participants were divided into two groups: 30 students with bilateral congenital moderate to severe hearing loss, selected from deaf schools located in Tehran, Iran, and 30 students with acquired hearing loss of the same degree, selected from Amiralam Hospital, Tehran, Iran; both were compared with a group of 30 age- and gender-matched normal-hearing subjects who served as controls in 2012. The musical stimuli consisted of three different sequences of music (sadness, happiness, and fear), each with a duration of 60 s. The students were asked to point to the words that best matched their emotions. Emotional perception of sadness, happiness, and fear in children with congenital hearing loss was significantly poorer than in the acquired hearing loss and normal-hearing groups (P<0.001). There was no significant difference in the emotional perception of sadness, happiness, and fear between the acquired hearing loss and normal-hearing groups (P=0.75, P=1, and P=0.16, respectively). Neural plasticity induced by hearing assistive devices may be affected by the time when a hearing aid was first fitted and how the auditory system responds to the reintroduction of certain sounds via amplification. Therefore, children who experienced auditory input of different sound patterns in their early childhood will show more perceptual flexibility in different situations than children with congenital hearing loss raised in Deaf culture.
Giannantonio, Sara; Polonenko, Melissa J.; Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.
2015-01-01
Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. 
The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology. PMID:26317976
Wang, Quanxin; Sporns, Olaf; Burkhalter, Andreas
2012-01-01
Much of the information used for visual perception and visually guided actions is processed in complex networks of connections within the cortex. To understand how this works in the normal brain and to determine the impact of disease, mice are promising models. In primate visual cortex, information is processed in a dorsal stream specialized for visuospatial processing and guided action and a ventral stream for object recognition. Here, we traced the outputs of 10 visual areas and used quantitative graph analytic tools of modern network science to determine, from the projection strengths in 39 cortical targets, the community structure of the network. We found a high density of the cortical graph that exceeded that previously shown in monkey. Each source area showed a unique distribution of projection weights across its targets (i.e. connectivity profile) that was well-fit by a lognormal function. Importantly, the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor and limbic cortex, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortex. These two subnetworks resemble dorsal and ventral cortical streams in primates, demonstrating that the basic layout of cortical networks is conserved across species. PMID:22457489
Bernardoni, Fabio; King, Joseph A; Geisler, Daniel; Stein, Elisa; Jaite, Charlotte; Nätsch, Dagmar; Tam, Friederike I; Boehm, Ilka; Seidel, Maria; Roessner, Veit; Ehrlich, Stefan
2016-04-15
Structural magnetic resonance imaging studies have documented reduced gray matter in acutely ill patients with anorexia nervosa to be at least partially reversible following weight restoration. However, few longitudinal studies exist and the underlying mechanisms of these structural changes are elusive. In particular, the relative speed and completeness of brain structure normalization during realimentation remain unknown. Here we report from a structural neuroimaging study including a sample of adolescent/young adult female patients with acute anorexia nervosa (n=47), long-term recovered patients (n=34), and healthy controls (n=75). The majority of acutely ill patients were scanned longitudinally (n=35): at the beginning of standardized weight restoration therapy and again after partial weight normalization (>10% body mass index increase). High-resolution structural images were processed and analyzed with the longitudinal stream of FreeSurfer software to test for changes in cortical thickness and volumes of select subcortical regions of interest. We found globally reduced cortical thickness in acutely ill patients to increase rapidly (0.06 mm/month) during brief weight restoration therapy (≈3 months). This significant increase was predicted by weight restoration alone and could not be ascribed to potentially mediating factors such as duration of illness, hydration status, or symptom improvements. By comparing cortical thickness in partially weight-restored patients with that measured in healthy controls, we confirmed that cortical thickness had normalized already at follow-up. This pattern of thinning in illness and rapid normalization during weight rehabilitation was largely mirrored in subcortical volumes. Together, our findings indicate that structural brain insults inflicted by starvation in anorexia nervosa may be reversed at a rate much faster than previously thought if interventions are successful before the disorder becomes chronic. 
This calls previously speculated mechanisms such as (de-)hydration and neurogenesis into question and suggests that neuronal and/or glial remodeling, including changes in macromolecular content, may underlie the gray matter alterations observed in anorexia nervosa.
Safety of the HyperSound® Audio System in Subjects with Normal Hearing.
Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L
2015-06-11
The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal-hearing subjects under normal use conditions, using a pre-exposure/post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure-tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) of >10 dB at two or more frequencies; and ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure-tone air conduction audiometry, tympanometry, DPOAEs, and an otologic symptoms questionnaire), followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal-hearing subjects under normal use conditions.
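The TTS criterion stated in this abstract can be expressed as a small check. This is a hedged sketch only: the function name and dictionary layout are mine, and it assumes the "two or more frequencies" rule applies to the pure-tone thresholds as well as the DPOAEs, a reading the abstract leaves ambiguous.

```python
# Sketch of the stated temporary threshold shift (TTS) criterion: flag
# TTS when the threshold shift exceeds 10 dB at two or more frequencies.
# Function name and dict layout are illustrative assumptions.

def has_tts(pre_db, post_db, criterion_db=10, min_freqs=2):
    """pre_db/post_db map frequency (Hz) -> threshold (dB HL).
    A positive shift (post - pre) means worse hearing. Returns True if
    the shift exceeds criterion_db at min_freqs or more frequencies."""
    shifts = [post_db[f] - pre_db[f] for f in pre_db]
    return sum(s > criterion_db for s in shifts) >= min_freqs

# Example: shifts of 15, 12, and 1 dB -> exceeds 10 dB at two
# frequencies -> TTS flagged.
pre = {1000: 10, 2000: 10, 4000: 15}
post = {1000: 25, 2000: 22, 4000: 16}
print(has_tts(pre, post))  # True
```

For DPOAEs, where a decrement means the post-exposure level is lower, the same check can be applied to pre minus post.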
The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study
ERIC Educational Resources Information Center
Degeest, Sofie; Keppler, Hannah; Corthals, Paul
2017-01-01
Purpose: The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Method: Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different…
ERIC Educational Resources Information Center
Zupan, Barbra; Sussman, Joan E.
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…
Effects of Age and Hearing Loss on Gap Detection and the Precedence Effect: Broadband Stimuli
ERIC Educational Resources Information Center
Roberts, Richard A.; Lister, Jennifer J.
2004-01-01
Older listeners with normal-hearing sensitivity and impaired-hearing sensitivity often demonstrate poorer-than-normal performance on tasks of speech understanding in noise and reverberation. Deficits in temporal resolution and in the precedence effect may underlie this difficulty. Temporal resolution is often studied by means of a gap-detection…
Souza, Pamela; Arehart, Kathryn; Miller, Christi Wise; Muralimanohar, Ramesh Kumar
2010-01-01
Objectives Recent research suggests that older listeners may have difficulty processing information related to the fundamental frequency (F0) of voiced speech. In this study, the focus was on the mechanisms that may underlie this reduced ability. We examined whether increased age resulted in decreased ability to perceive F0 using fine structure cues provided by the harmonic structure of voiced speech sounds and/or cues provided by high-rate envelope fluctuations (periodicity). Design Younger listeners with normal hearing and older listeners with normal to near-normal hearing completed two tasks of F0 perception. In the first task (steady-state F0), the fundamental frequency difference limen (F0DL) was measured adaptively for synthetic vowel stimuli. In the second task (time-varying F0), listeners relied on variations in F0 to judge intonation of synthetic diphthongs. For both tasks, three processing conditions were created: 8-channel vocoding which preserved periodicity cues to F0; a simulated electroacoustic stimulation condition, which consisted of high-frequency vocoder processing combined with a low-pass filtered portion, and offered both periodicity and fine-structure cues to F0; and an unprocessed condition. Results F0 difference limens for steady-state vowel sounds and the ability to discern rising and falling intonations were significantly worse in the older subjects compared to the younger subjects. For both older and younger listeners scores were lowest for the vocoded condition, and there was no difference in scores between the unprocessed and electroacoustic simulation conditions. Conclusions Older listeners had difficulty using periodicity cues to obtain information related to talker fundamental frequency. However, performance was improved by combining periodicity cues with (low-frequency) acoustic information, and that strategy should be considered in individuals who are appropriate candidates for such processing. 
For cochlear implant candidates, that effect might be achieved by partial electrode insertion providing acoustic stimulation in the low frequencies; or by the combination of a traditional implant in one ear and a hearing aid in the opposite ear. PMID:20739892
Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.
2015-01-01
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746
Bae, Seongryu; Lee, Sangyoon; Lee, Sungchul; Jung, Songee; Makino, Keitaro; Park, Hyuntae; Shimada, Hiroyuki
2018-06-01
We examined the role of social frailty in the association between hearing problems and mild cognitive impairment (MCI), and investigated which cognitive impairment domains are most strongly involved. Participants were 4251 older adults (mean age 72.5 ± 5.2 years, 46.1% male) who met the study inclusion criteria. Hearing problems were measured using the Hearing Handicap Inventory for the Elderly. Social frailty was identified using responses to five questions. Participants were divided into four groups depending on the presence of social frailty and hearing problems: control, social frailty, hearing problem, and co-occurrence. We assessed memory, attention, executive function, and processing speed using the National Center for Geriatrics and Gerontology-Functional Assessment Tool. Participants were categorized into normal cognition and single- and multiple-domain MCI, depending on the number of impaired cognitive domains. For multiple-domain MCI, the highest odds ratio (OR) was observed in the co-occurrence group (OR: 3.89, 95% confidence interval [CI]: 1.96-7.72), followed by the social frailty (OR: 2.65, 95% CI: 1.49-4.67) and hearing problem (OR: 1.90, 95% CI: 1.08-3.34) groups, compared with the control group. However, single-domain MCI was not significantly associated with any group. Cognitive domain analysis revealed that impaired executive function and processing speed were associated with the co-occurrence, hearing problem, and social frailty groups, respectively. Social frailty and hearing problems were independently associated with multiple-domain MCI. Comorbid conditions were more strongly associated with multiple-domain MCI. Longitudinal studies are needed to elucidate the causal role of social frailty in the association between hearing impairment and MCI.
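For readers unfamiliar with the statistics reported above, the following sketch shows how an odds ratio and Wald 95% confidence interval of the kind quoted (e.g. OR 3.89, 95% CI 1.96-7.72) are computed from a 2x2 table of counts. The counts in the example are invented purely for illustration; the study itself derived its ORs from regression models on the full sample.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = outcome present/absent in the exposed group;
    c, d = outcome present/absent in the unexposed group."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) from the cell counts.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20/10 with/without MCI in one group,
# 10/20 in the comparison group.
print(odds_ratio_ci(20, 10, 10, 20))
```

An OR above 1 with a CI that excludes 1, as in all three reported group comparisons, indicates a statistically significant association.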
The impact of aging and hearing status on verbal short-term memory.
Verhaegen, Clémence; Collette, Fabienne; Majerus, Steve
2014-01-01
The aim of this study is to assess the impact of hearing status on age-related decrease in verbal short-term memory (STM) performance. This was done by administering a battery of verbal STM tasks to elderly and young adult participants matched for hearing thresholds, as well as to young normal-hearing control participants. The matching procedure allowed us to assess the importance of hearing loss as an explanatory factor of age-related STM decline. We observed that elderly participants and hearing-matched young participants showed equal levels of performance in all verbal STM tasks, and performed overall lower than the normal-hearing young control participants. This study provides evidence for recent theoretical accounts considering reduced hearing level as an important explanatory factor of poor auditory-verbal STM performance in older adults.
Neural Alterations in Acquired Age-Related Hearing Loss
Mudar, Raksha A.; Husain, Fatima T.
2016-01-01
Hearing loss is one of the most prevalent chronic health conditions in older adults. Growing evidence suggests that hearing loss is associated with reduced cognitive functioning and incident dementia. In this mini-review, we briefly examine literature on anatomical and functional alterations in the brains of adults with acquired age-associated hearing loss, which may underlie the cognitive consequences observed in this population, focusing on studies that have used structural and functional magnetic resonance imaging, diffusion tensor imaging, and event-related electroencephalography. We discuss structural and functional alterations observed in the temporal and frontal cortices and the limbic system. These neural alterations are discussed in the context of common cause, information-degradation, and sensory-deprivation hypotheses, and we suggest possible rehabilitation strategies. Although we are beginning to learn more about changes in neural architecture and functionality related to age-associated hearing loss, much work remains to be done. Understanding the neural alterations will provide objective markers for early identification of neural consequences of age-associated hearing loss and for evaluating benefits of intervention approaches. PMID:27313556
Postpartum cortical blindness.
Faiz, Shakeel Ahmed
2008-09-01
A 30-year-old third gravida with previous normal pregnancies and an unremarkable prenatal course underwent an emergency lower segment caesarean section at a peripheral hospital for failure of labour to progress. She developed bilateral cortical blindness immediately after recovery from anaesthesia; CT and MR scans showed cortical infarcts due to cerebral angiopathy, a rare complication of an otherwise normal pregnancy.
Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J
2018-01-01
Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.
An overview of neural function and feedback control in human communication.
Hood, L J
1998-01-01
The speech and hearing mechanisms depend on accurate sensory information and intact feedback mechanisms to facilitate communication. This article provides a brief overview of some components of the nervous system important for human communication and some electrophysiological methods used to measure cortical function in humans. An overview of automatic control and feedback mechanisms in general and as they pertain to the speech motor system and control of the hearing periphery is also presented, along with a discussion of how the speech and auditory systems interact.
Turğut, Nedim; Karlıdağ, Turgut; Başar, Figen; Yalçın, Şinasi; Kaygusuz, İrfan; Keleş, Erol; Birkent, Ömer Faruk
2015-01-01
This study aims to examine the relationship between written language skills and factors thought to affect them, namely mean hearing loss, duration of auditory deprivation, speech discrimination score, pre-school education attendance, and socioeconomic status, in hearing-impaired children attending 4th-7th grades of primary school in an inclusive educational environment. The study included 25 hearing-impaired children (14 males, 11 females; mean age 11.4±1.4 years; range 10 to 14 years) (study group) and 20 children (9 males, 11 females; mean age 11.5±1.3 years; range 10 to 14 years) with normal hearing in the same age group and studying in the same classes (control group). The study group was divided into two subgroups (group 1a and group 1b), since some of the children used hearing aids while others used cochlear implants. Intragroup comparisons and relational screening were performed for hearing aid and cochlear implant users, and intergroup comparisons were performed to evaluate the effect of the parameters on written language skills. The written expression skill level of the hearing-impaired children was significantly lower than that of their normal-hearing peers (p=0.001). A significant relationship was detected between written language skills and mean hearing loss (p=0.048), duration of auditory deprivation (p=0.021), speech discrimination score (p=0.014), and preschool attendance (p=0.005), whereas no significant relationship was found for socioeconomic status (p=0.636). It can be concluded that hearing loss negatively affects written language skills, and that hearing-impaired individuals develop lower-level written language skills than their normal-hearing peers.
van den Tillaart-Haverkate, Maj; de Ronde-Brons, Inge; Dreschler, Wouter A; Houben, Rolph
2017-01-01
Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): -5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
Primary cortical folding in the human newborn: an early marker of later functional development.
Dubois, J; Benders, M; Borradori-Tolsa, C; Cachia, A; Lazeyras, F; Ha-Vinh Leuchter, R; Sizonenko, S V; Warfield, S K; Mangin, J F; Hüppi, P S
2008-08-01
In the human brain, the morphology of cortical gyri and sulci is complex and variable among individuals, and it may reflect pathological functioning with specific abnormalities observed in certain developmental and neuropsychiatric disorders. Since cortical folding occurs early during brain development, these structural abnormalities might be present long before the appearance of functional symptoms. So far, the precise mechanisms responsible for such alteration in the convolution pattern during intra-uterine or post-natal development are still poorly understood. Here we compared anatomical and functional brain development in vivo among 45 premature newborns who experienced different intra-uterine environments: 22 normal singletons, 12 twins and 11 newborns with intrauterine growth restriction (IUGR). Using magnetic resonance imaging (MRI) and dedicated post-processing tools, we investigated early disturbances in cortical formation at birth, over the developmental period critical for the emergence of convolutions (26-36 weeks of gestational age), and defined early 'endophenotypes' of sulcal development. We demonstrated that twins have a delayed but harmonious maturation, with reduced surface and sulcation index compared to singletons, whereas the gyrification of IUGR newborns is discordant to the normal developmental trajectory, with a more pronounced reduction of surface in relation to the sulcation index compared to normal newborns. Furthermore, we showed that these structural measurements of the brain at birth are predictors of infants' outcome at term equivalent age, for MRI-based cerebral volumes and neurobehavioural development evaluated with the assessment of preterm infant's behaviour (APIB).
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Harris, Kelly C.; Wilson, Sara; Eckert, Mark A.; Dubno, Judy R.
2011-01-01
Objectives The goal of this study was to examine the degree to which age-related differences in early or automatic levels of auditory processing and attention-related processes explain age-related differences in auditory temporal processing. We hypothesized that age-related differences in attention and cognition compound age-related differences at automatic levels of processing, contributing to the robust age effects observed during challenging listening tasks. Design We examined age-related and individual differences in cortical event-related potential (ERP) amplitudes and latencies, processing speed, and gap detection from twenty-five younger and twenty-five older adults with normal hearing. ERPs were elicited by brief silent periods (gaps) in an otherwise continuous broadband noise and were measured under two listening conditions, passive and active. During passive listening, participants ignored the stimulus and read quietly. During active listening, participants button pressed each time they detected a gap. Gap detection (percent detected) was calculated for each gap duration during active listening (3, 6, 9, 12 and 15 ms). Processing speed was assessed using the Purdue Pegboard test and the Connections Test. Repeated measures ANOVAs assessed effects of age on gap detection, processing speed, and ERP amplitudes and latencies. An “attention modulation” construct was created using linear regression to examine the effects of attention while controlling for age-related differences in auditory processing. Pearson correlation analyses assessed the extent to which attention modulation, ERPs, and processing speed predicted behavioral gap detection. Results: Older adults had significantly poorer gap detection and slower processing speed than younger adults. Even after adjusting for poorer gap detection, the neurophysiological response to gap onset was atypical in older adults with reduced P2 amplitudes and virtually absent N2 responses. 
Moreover, individual differences in attention modulation of P2 response latencies and N2 amplitudes predicted gap detection and processing speed in older adults. That is, older adults whose P2 latencies decreased and N2 amplitudes increased with active listening had faster processing speed and better gap detection than those whose P2 latencies increased and N2 amplitudes decreased with attention. Conclusions Results from the current study are broadly consistent with previous findings that older adults exhibit significantly poorer gap detection than younger adults in challenging tasks. Even after adjusting for poorer gap detection, older and younger adults showed robust differences in their electrophysiological responses to sound offset. Furthermore, the degree to which attention modulated the ERP was associated with individual variation in measures of processing speed and gap detection. Taken together, these results suggest an age-related deficit in early or automatic levels of auditory temporal processing and indicate that some older adults may be less able to compensate for declines in processing by attending to the stimulus. These results extend our previous findings and support the hypothesis that age-related differences in cognitive or attention-related processing, including processing speed, contribute to an age-related decrease in gap detection. PMID:22374321
Castiglione, Alessandro; Benatti, Alice; Velardita, Carmelita; Favaro, Diego; Padoan, Elisa; Severi, Daniele; Pagliaro, Michela; Bovo, Roberto; Vallesi, Antonino; Gabelli, Carlo; Martini, Alessandro
2016-01-01
A growing interest in cognitive effects associated with speech and hearing processes is spreading throughout the scientific community, essentially guided by evidence that central and peripheral hearing loss is associated with cognitive decline. For the present research, 125 participants older than 65 years of age (105 with hearing impairment and 20 with normal hearing) were enrolled, divided into 6 groups according to their degree of hearing loss, and assessed to determine the effects of the treatment applied. Patients in our research program routinely undergo an extensive audiological and cognitive evaluation protocol providing results from the Digit Span test, Stroop color-word test, Montreal Cognitive Assessment and Geriatric Depression Scale, before and after rehabilitation. Data analysis was performed for a cross-sectional and longitudinal study of the outcomes for the different treatment groups. Each group demonstrated improvement after auditory rehabilitation or training on short- and long-term memory tasks, level of depression and cognitive status scores. Auditory rehabilitation by cochlear implants or hearing aids is also effective among older adults (median age of 74 years) with different degrees of hearing loss, and enables positive improvements in terms of social isolation, depression and cognitive performance. © 2016 The Author(s) Published by S. Karger AG, Basel.
Cortical bone thickening in Type A posterior atlas arch defects: experimental report.
Sanchis-Gimeno, Juan A; Llido, Susanna; Guede, David; Martinez-Soriano, Francisco; Ramon Caeiro, Jose; Blanco-Perez, Esther
2017-03-01
To date, no information about the cortical bone microstructural properties in atlas vertebrae with posterior arch defects has been reported. To test whether there is increased cortical bone thickening in atlases with Type A posterior atlas arch defects in an experimental model. Micro-computed tomography (CT) study on cadaveric atlas vertebrae. We analyzed the cortical bone thickness, the cortical volume, and the medullary volume (SkyScan 1172 Bruker micro-CT NV, Kontich, Belgium) in cadaveric dry vertebrae with a Type A atlas arch defect and normal control vertebrae. The micro-CT study revealed significant differences in cortical bone thickness (p=.005), cortical volume (p=.003), and medullary volume (p=.009) values between the normal and the Type A vertebrae. Type A congenital atlas arch defects present a cortical bone thickening that may play a protective role against atlas fractures. Copyright © 2016 Elsevier Inc. All rights reserved.
Cortical processes of speech illusions in the general population.
Schepers, E; Bodar, L; van Os, J; Lousberg, R
2016-10-18
There is evidence that experimentally elicited auditory illusions in the general population index risk for psychotic symptoms. As little is known about the underlying cortical mechanisms of auditory illusions, an experiment was conducted to analyze processing of auditory illusions in a general population sample. In a follow-up design with two measurement moments (baseline and 6 months), participants (n = 83) underwent the White Noise task with simultaneous recording from a 14-lead EEG. An auditory illusion was defined as hearing any speech in a sound fragment containing white noise. A total number of 256 speech illusions (SI) were observed over the two measurements, with a high degree of stability of SI over time. There were 7 main effects of speech illusion on the EEG alpha band, the most significant indicating a decrease in activity at T3 (t = -4.05). Other EEG frequency bands (slow beta, fast beta, gamma, delta, theta) showed no significant associations with SI. SIs are characterized by reduced alpha activity in non-clinical populations. Given the association of SIs with psychosis, follow-up research is required to examine the possibility of reduced alpha activity mediating SIs in high risk and symptomatic populations.
Zupan, Barbra; Sussman, Joan E
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.
The effect of noise-induced hearing loss on the intelligibility of speech in noise
NASA Astrophysics Data System (ADS)
Smoorenburg, G. F.; Delaat, J. A. P. M.; Plomp, R.
1981-06-01
Speech reception thresholds, both in quiet and in noise, and tone audiograms were measured for 14 normal ears (7 subjects) and 44 ears (22 subjects) with noise-induced hearing loss. Maximum hearing loss in the 4-6 kHz region equalled 40 to 90 dB (losses exceeded by 90% and 10%, respectively). Hearing loss for speech in quiet, measured with respect to the median speech reception threshold for normal ears, ranged from 1.8 dB to 13.4 dB. For speech in noise, the corresponding numbers were 1.2 dB to 7.0 dB, which means that the subjects with noise-induced hearing loss needed a 1.2 to 7.0 dB higher signal-to-noise ratio than normal to understand sentences equally well. A hearing loss for speech of 1 dB corresponds to a decrease in sentence intelligibility of 15 to 20%. The relation between hearing handicap, conceived as a reduced ability to understand speech, and the tone audiogram is discussed. The higher signal-to-noise ratio needed by people with noise-induced hearing loss to understand speech in noisy environments is shown to be due partly to the decreased bandwidth of their hearing caused by the noise dip.
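The abstract's rule of thumb, that a 1 dB hearing loss for speech corresponds to roughly a 15-20% drop in sentence intelligibility, can be sketched as a quick back-of-the-envelope computation. This is purely an illustrative aid: the midpoint slope and the 100% cap are assumptions, not values taken from the study.

```python
# Illustrative sketch (not from the paper): convert a speech reception
# threshold (SRT) shift in noise into a rough estimated drop in sentence
# intelligibility, using the abstract's 15-20%-per-dB rule of thumb.

def intelligibility_drop(srt_shift_db, percent_per_db=17.5):
    """Estimated percentage-point drop in sentence intelligibility for a
    given SRT shift in dB; the 17.5%/dB midpoint slope is an assumption.
    Capped at 100% since intelligibility cannot fall below zero."""
    return min(100.0, srt_shift_db * percent_per_db)

# The reported SRT shifts in noise ranged from 1.2 to 7.0 dB:
for shift in (1.2, 7.0):
    print(f"{shift} dB shift -> ~{intelligibility_drop(shift):.0f}% drop")
```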
Effects of Long-Term Musical Training on Cortical Auditory Evoked Potentials.
Brown, Carolyn J; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul J
Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared with nonmusicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and nonmusicians. Twenty individuals (10 musicians and 10 nonmusicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three interval, forced choice procedure. The ACC was recorded and used as an objective (i.e., nonbehavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. As a group, musicians were able to detect smaller changes in pitch than nonmusicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than nonmusicians. ACC responses recorded from musicians were larger than those recorded from nonmusicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than nonmusicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the ripple noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than nonmusicians. 
Those differences are evident not only in perceptual/behavioral tests but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based, auditory training program perhaps geared toward pediatric or hearing-impaired listeners.
Blanks, Deidra A.; Buss, Emily; Grose, John H.; Fitzpatrick, Douglas C.; Hall, Joseph W.
2009-01-01
Objectives The present study investigated interaural time discrimination for binaurally mismatched carrier frequencies in listeners with normal hearing. One goal of the investigation was to gain insights into binaural hearing in patients with bilateral cochlear implants, where the coding of interaural time differences may be limited by mismatches in the neural populations receiving stimulation on each side. Design Temporal envelopes were manipulated to present low frequency timing cues to high frequency auditory channels. Carrier frequencies near 4 kHz were amplitude modulated at 128 Hz via multiplication with a half-wave rectified sinusoid, and that modulation was either in-phase across ears or delayed to one ear. Detection thresholds for non-zero interaural time differences were measured for a range of stimulus levels and a range of carrier frequency mismatches. Data were also collected under conditions designed to limit cues based on stimulus spectral spread, including masking and truncation of sidebands associated with modulation. Results Listeners with normal hearing can detect interaural time differences in the face of substantial mismatches in carrier frequency across ears. Conclusions The processing of interaural time differences in listeners with normal hearing is likely based on spread of excitation into binaurally matched auditory channels. Sensitivity to interaural time differences in listeners with cochlear implants may depend upon spread of current that results in the stimulation of neural populations that share common tonotopic space bilaterally. PMID:18596646
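The stimulus construction described in the Design section above, a carrier near 4 kHz amplitude-modulated at 128 Hz by multiplication with a half-wave rectified sinusoid, with the modulator delayed to one ear to impose an envelope interaural time difference, can be sketched roughly as follows. The sample rate, duration, and delay value are illustrative assumptions, not parameters from the study.

```python
import numpy as np

# Rough sketch (assumed parameters) of an envelope-ITD stimulus: a 4 kHz
# carrier multiplied by a half-wave rectified 128 Hz sinusoid, with the
# modulator delayed in one ear only.

fs = 48000            # sample rate in Hz (an assumption)
dur = 0.5             # stimulus duration in seconds (an assumption)
fc, fm = 4000.0, 128.0  # carrier and modulation frequencies from the abstract
t = np.arange(int(fs * dur)) / fs

def am_tone(itd_s=0.0):
    """Carrier modulated by a half-wave rectified sinusoid whose phase is
    delayed by itd_s seconds (envelope ITD; carrier phase is unchanged)."""
    modulator = np.maximum(np.sin(2 * np.pi * fm * (t - itd_s)), 0.0)
    return modulator * np.sin(2 * np.pi * fc * t)

left = am_tone(0.0)
right = am_tone(200e-6)   # 200 microsecond envelope delay in one ear (assumed)
```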
Kreft, Heather A.
2014-01-01
Under normal conditions, human speech is remarkably robust to degradation by noise and other distortions. However, people with hearing loss, including those with cochlear implants, often experience great difficulty in understanding speech in noisy environments. Recent work with normal-hearing listeners has shown that the amplitude fluctuations inherent in noise contribute strongly to the masking of speech. In contrast, this study shows that speech perception via a cochlear implant is unaffected by the inherent temporal fluctuations of noise. This qualitative difference between acoustic and electric auditory perception does not seem to be due to differences in underlying temporal acuity but can instead be explained by the poorer spectral resolution of cochlear implants, relative to the normally functioning ear, which leads to an effective smoothing of the inherent temporal-envelope fluctuations of noise. The outcome suggests an unexpected trade-off between the detrimental effects of poorer spectral resolution and the beneficial effects of a smoother noise temporal envelope. This trade-off provides an explanation for the long-standing puzzle of why strong correlations between speech understanding and spectral resolution have remained elusive. The results also provide a potential explanation for why cochlear-implant users and hearing-impaired listeners exhibit reduced or absent masking release when large and relatively slow temporal fluctuations are introduced in noise maskers. The multitone maskers used here may provide an effective new diagnostic tool for assessing functional hearing loss and reduced spectral resolution. PMID:25315376
Human neuromagnetic steady-state responses to amplitude-modulated tones, speech, and music.
Lamminmäki, Satu; Parkkonen, Lauri; Hari, Riitta
2014-01-01
Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears' inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs. MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales. The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. 
SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth. The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
Wang, M D; Reed, C M; Bilger, R C
1978-03-01
It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave results comparable to those of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.
Zhu, Shufeng; Wong, Lena L N; Wang, Bin; Chen, Fei
2017-07-12
The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal contour, flat contour, or a tone contour randomly selected from the four Mandarin lexical tone contours. A convenience sample of 75 Mandarin-speaking participants with normal hearing, ages 7, 9, and 11 years (25 participants in each age group), was selected. Participants were asked to repeat the synthesized speech in quiet and in speech spectrum-shaped noise at 0 dB signal-to-noise ratio. In quiet, sentence recognition by the 11-year-old children was similar to that of adults, and misrepresented lexical tone contours did not have a detrimental effect. However, the performance of children ages 9 and 7 years was significantly poorer. The performance of all three age groups, especially the younger children, declined significantly in noise. The present research suggests that lexical tone contour plays an important role in Mandarin sentence recognition, and misrepresented tone contours result in greater difficulty in sentence recognition in younger children. These results imply that maturation and/or language use experience play a role in the processing of tone contours for Mandarin speech understanding, particularly in noise.
Kramer, Sophia E; Teunissen, Charlotte E; Zekveld, Adriana A
2016-01-01
Pupillometry is one method that has been used to measure processing load expended during speech understanding. Notably, speech perception (in noise) tasks can evoke a pupil response. It is not known if there is concurrent activation of the sympathetic nervous system as indexed by salivary cortisol and chromogranin A (CgA) and whether such activation differs between normally hearing (NH) and hard-of-hearing (HH) adults. Ten NH and 10 adults with mild-to-moderate hearing loss (mean age 52 years) participated. Two speech perception tests were administered in random order: one in quiet targeting 100% correct performance and one in noise targeting 50% correct performance. Pupil responses and salivary samples for cortisol and CgA analyses were collected four times: before testing, after the two speech perception tests, and at the end of the session. Participants rated their perceived accuracy, effort, and motivation. Effects were examined using repeated-measures analyses of variance. Correlations between outcomes were calculated. HH listeners had smaller peak pupil dilations (PPDs) than NH listeners in the speech in noise condition only. No group or condition effects were observed for the cortisol data, but HH listeners tended to have higher cortisol levels across conditions. CgA levels were larger at the pretesting time than at the three other test times. Hearing impairment did not affect CgA. Self-rated motivation correlated most often with cortisol or PPD values. The three physiological indicators of cognitive load and stress (PPD, cortisol, and CgA) are not equally affected by speech testing or hearing impairment. Each of them seem to capture a different dimension of sympathetic nervous system activity.
Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging
Henschke, Julia U.; Ohl, Frank W.; Budinger, Eike
2018-01-01
During aging, human response times (RTs) to unisensory and crossmodal stimuli decrease. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals. PMID:29551970
Increased medial olivocochlear reflex strength in normal-hearing, noise-exposed humans
2017-01-01
Research suggests that college-aged adults are vulnerable to tinnitus and hearing loss due to regular exposure to traumatic levels of noise. Recent human studies have associated high noise exposure background (NEB, i.e., routine noise exposure) with reduced cochlear output and impaired speech processing ability in subjects with clinically normal hearing sensitivity. While the relationship between NEB and the function of auditory afferent neurons has been studied in the literature, little is known about the effects of NEB on the functioning of the auditory efferent system. The objective of the present study was to investigate the relationship between medial olivocochlear reflex (MOCR) strength and NEB in subjects with clinically normal hearing sensitivity. It was hypothesized that subjects with high NEB would exhibit reduced afferent input to the MOCR circuit, which would subsequently lead to reduced MOCR strength. In normal-hearing listeners, the study examined (1) the association between NEB and baseline click-evoked otoacoustic emissions (CEOAEs) and (2) the association between NEB and MOCR strength. The MOCR was measured using CEOAEs evoked by 60 dB pSPL linear clicks in contralateral acoustic stimulation (CAS)-off and CAS-on (broadband noise at 60 dB SPL) conditions. Participants with at least a 6 dB signal-to-noise ratio (SNR) in both the CAS-off and CAS-on conditions were included for analysis. A normalized CEOAE inhibition index was calculated to express MOCR strength as a percentage. NEB was estimated using a validated questionnaire. The results showed that NEB was not associated with baseline CEOAE amplitude (r = -0.112, p = 0.586). Contrary to the hypothesis, MOCR strength was positively correlated with NEB (r = 0.557, p = 0.003). NEB remained a significant predictor of MOCR strength (β = 2.98, t(19) = 3.474, p = 0.003) after the unstandardized coefficient was adjusted to control for the effects of smoking, sound level tolerance (SLT), and tinnitus. These data provide evidence that MOCR strength is associated with NEB. The functional significance of increased MOCR strength is discussed. PMID:28886123
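The normalized CEOAE inhibition index reported above lends itself to a short illustration. The abstract does not give the exact formula, so the sketch below assumes a common definition: MOCR strength expressed as the percent reduction of CEOAE amplitude in the CAS-on condition relative to the CAS-off baseline.

```python
# Hypothetical sketch of a normalized CEOAE inhibition index.
# Assumption (not stated in the abstract): MOCR strength is the
# percent reduction of emission amplitude under contralateral noise,
# with the CAS-off condition as the baseline.

def inhibition_index(cas_off_amplitude: float, cas_on_amplitude: float) -> float:
    """Return MOCR strength as percent reduction of CEOAE amplitude.

    Amplitudes are on a linear scale (e.g., micropascals), not dB.
    """
    if cas_off_amplitude <= 0:
        raise ValueError("baseline CEOAE amplitude must be positive")
    return 100.0 * (cas_off_amplitude - cas_on_amplitude) / cas_off_amplitude

# A listener whose emission drops from 10.0 to 8.5 uPa under
# contralateral noise has an index of 15%.
print(inhibition_index(10.0, 8.5))  # -> 15.0
```

Expressing inhibition as a percentage of the baseline, rather than as a raw dB difference, normalizes for listeners whose emissions differ in overall level.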
Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji
2014-01-01
The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL), using functional magnetic resonance imaging (fMRI). Fifty participants took part in the study and were divided into four groups according to their hearing status and experience of using sign language: a prelingual deafness signer group (PDS), a normal-hearing non-signer group (HnS), a native signer group with normal hearing (HNS), and an acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects while they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either the observation or the imitation CSL task, and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation, in terms of the cluster size of the activated voxels, in the bilateral superior parietal lobule, cuneate lobe, and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS, and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus, which was also found in the HNS and HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency because they engage more actively with the phonetic and semantic elements. In addition, activations of the bilateral superior temporal gyrus and inferior parietal lobule were found only in the PDS and HNS groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to discuss the relationship between sign language and its neural mechanisms.
Labudda, Kirsten; Brand, Matthias; Mertens, Markus; Ebner, Alois; Markowitsch, Hans J; Woermann, Friedrich G
2010-02-01
We investigated the impact of a congenital prefrontal lesion and its resection on decision making under risk and under ambiguity in a patient with right mediofrontal cortical dysplasia. Both kinds of decision making are normally associated with the medial prefrontal cortex. We additionally studied pre- and postsurgical fMRI activations when processing information relevant for risky decision making. Results indicate selective impairments of ambiguous decision making pre- and postsurgically. Decision making under risk was intact. In contrast to healthy subjects the patient exhibited no activation within the dysplastic anterior cingulate cortex but left-sided orbitofrontal activation on the fMRI task suggesting early reorganization processes.
Visual attention and flexible normalization pools
Schwartz, Odelia; Coen-Cagli, Ruben
2013-01-01
Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
Tinnitus in normally hearing patients: clinical aspects and repercussions.
Sanchez, Tanit Ganz; Medeiros, Italo Roberto Torres de; Levy, Cristiane Passos Dias; Ramalho, Jeanne da Rosa Oiticica; Bento, Ricardo Ferreira
2005-01-01
Patients with tinnitus and normal hearing constitute an important group, given that findings are not influenced by hearing loss. However, this group is rarely studied, so we do not know whether its clinical characteristics and interference in daily life are the same as those of patients with tinnitus and hearing loss. To compare tinnitus characteristics and interference in daily life among patients with and without hearing loss. Historic cohort. Among 744 tinnitus patients seen at a Tinnitus Clinic, 55 with normal audiometry were retrospectively evaluated. The control group consisted of 198 patients with tinnitus and hearing loss, following the same protocol. We analyzed the patients' data as well as the tinnitus characteristics and interference in daily life. The mean age of the studied group (43.1 +/- 13.4 years) was significantly lower than that of the control group (49.9 +/- 14.5 years). In both groups, tinnitus was predominant in women and was mostly bilateral, single tone, and constant, with no differences between the groups. Interference with concentration and emotional status (25.5% and 36.4%) was significantly lower in the studied group than in the control group (46% and 61.6%), but no such difference was found for interference with sleep and social life. Patients with tinnitus and normal hearing showed characteristics similar to those of patients with hearing loss. However, patient age and interference with concentration and emotional status were significantly lower in this group.
Reed, Amanda C.; Centanni, Tracy M.; Borland, Michael S.; Matney, Chanel J.; Engineer, Crystal T.; Kilgard, Michael P.
2015-01-01
Objectives Hearing loss is a commonly experienced disability in a variety of populations including veterans and the elderly and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech will be differentially impaired in an animal model after two forms of hearing loss. Design Sixteen female Sprague–Dawley rats were exposed to one of two types of broadband noise which was either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. Results Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between speech sounds. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. Conclusions These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies. PMID:25072238
McArdle, Rachel; Wilson, Richard H
2008-06-01
To analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
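The "variance accounted for" figures in the regression above (45% vs. 3%) are values of the coefficient of determination, R². As an illustrative sketch only (the study used multiple predictors; this shows the one-predictor case, where the interpretation of R² is the same):

```python
# Illustrative sketch: R^2 ("variance accounted for") of a simple
# least-squares regression. Not the study's actual analysis, which
# involved multiple acoustic, phonetic, and lexical predictors.

def r_squared(x, y):
    """R^2 of the least-squares line predicting y from x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual and total sums of squares
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

# A perfectly linear relationship accounts for all the variance.
print(r_squared([1, 2, 3], [2, 4, 6]))  # -> 1.0
```

An R² of 0.45, as reported for the acoustic-plus-phonetic model, means the predictors jointly explain 45% of the variability in the 50% points across words.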
Consonant-recognition patterns and self-assessment of hearing handicap.
Hustedde, C G; Wiley, T L
1991-12-01
Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory--Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap between the two groups of hearing-impaired listeners, although the inventory was sensitive to perceived differences in hearing abilities between listeners who did and did not have a hearing loss. Experiment 2 was aimed at evaluating the consonant error patterns that accounted for the observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability between normal-hearing and hearing-impaired listeners.
Sheffield, Benjamin; Brungart, Douglas; Tufts, Jennifer; Ness, James
2017-01-01
To examine the relationship between hearing acuity and operational performance in simulated dismounted combat. Individuals wearing hearing loss simulation systems competed in a paintball-based exercise where the objective was to be the last player remaining. Four hearing loss profiles were tested in each round (no hearing loss, mild, moderate and severe) and four rounds were played to make up a match. This allowed counterbalancing of simulated hearing loss across participants. Forty-three participants across two data collection sites (Fort Detrick, Maryland and the United States Military Academy, New York). All participants self-reported normal hearing except for two who reported mild hearing loss. Impaired hearing had a greater impact on the offensive capabilities of participants than it did on their "survival", likely due to the tendency for individuals with simulated impairment to adopt a more conservative behavioural strategy than those with normal hearing. These preliminary results provide valuable insights into the impact of impaired hearing on combat effectiveness, with implications for the development of improved auditory fitness-for-duty standards, the establishment of performance requirements for hearing protection technologies, and the refinement of strategies to train military personnel on how to use hearing protection in combat environments.
Altered Cortical Swallowing Processing in Patients with Functional Dysphagia: A Preliminary Study
Wollbrink, Andreas; Warnecke, Tobias; Winkels, Martin; Pantev, Christo; Dziewas, Rainer
2014-01-01
Objective Current neuroimaging research on functional disturbances provides growing evidence for objective neuronal correlates of allegedly psychogenic symptoms, thereby shifting the disease concept from a psychological towards a neurobiological model. Functional dysphagia is such a rare condition, whose pathogenetic mechanism is largely unknown. In the absence of any organic reason for a patient's persistent swallowing complaints, sensorimotor processing abnormalities involving central neural pathways constitute a potential etiology. Methods In this pilot study we measured cortical swallow-related activation in 5 patients diagnosed with functional dysphagia and a matched group of healthy subjects applying magnetoencephalography. Source localization of cortical activation was done with synthetic aperture magnetometry. To test for significant differences in cortical swallowing processing between groups, a non-parametric permutation test was afterwards performed on individual source localization maps. Results Swallowing task performance was comparable between groups. In relation to control subjects, in whom activation was symmetrically distributed in rostro-medial parts of the sensorimotor cortices of both hemispheres, patients showed prominent activation of the right insula, dorsolateral prefrontal cortex and lateral premotor, motor as well as inferolateral parietal cortex. Furthermore, activation was markedly reduced in the left medial primary sensory cortex as well as right medial sensorimotor cortex and adjacent supplementary motor area (p<0.01). Conclusions Functional dysphagia - a condition with assumed normal brain function - seems to be associated with distinctive changes of the swallow-related cortical activation pattern. Alterations may reflect exaggerated activation of a widely distributed vigilance, self-monitoring and salience rating network that interferes with down-stream deglutition sensorimotor control. PMID:24586948
Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E.; Moore, Brian C. J.
2018-01-01
Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the “clean” speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids. PMID:29708061
Bateman, G A
2003-02-01
Superficial cortical venous compression secondary to alterations in craniospinal compliance is implicated in the pathogenesis of normal pressure hydrocephalus (NPH). A reduction in the pulsation in the outflow of the cortical veins would be expected to occur following compression of these veins, and this has been shown in NPH. If cortical vein compression is a causative factor in NPH, it would be expected that cortical vein compliance, as measured by pulsatility, would be significantly altered by a curative procedure, i.e., shunt tube insertion. My purpose is to compare the blood flow pulsatility characteristics in a group of patients with NPH before and after shunt tube insertion. I initially studied 18 subjects without pathology with MRI flow quantification studies of the cerebral arteries and veins to define the range of normality. The main study involved 18 patients with idiopathic dementia and mild leukoaraiosis who served as controls and seven patients with NPH studied before and after shunt insertion. Arterial, superior sagittal and straight sinus pulsatility was not significantly different between the patients with idiopathic dementia and the NPH patients before or after shunting. Cortical vein pulsatility before shunting in the patients with NPH was 43% lower than in those with idiopathic dementia (P = 0.006). Following shunting, cortical vein pulsatility increased by 186% (P = 0.007). There is thus reduced compliance in cortical veins in NPH, which is significantly increased in patients who respond to insertion of a shunt tube. These findings suggest that reversible elevation in cortical vein pressure and reversal of the normal absorption pathway for cerebrospinal fluid may underlie the pathophysiology of NPH.
The Envoy® Totally Implantable Hearing System, St. Croix Medical
Kroll, Kai; Grant, Iain L.; Javel, Eric
2002-01-01
The Totally Implantable Envoy® System is currently undergoing clinical trials in both the United States and Europe. The fully implantable hearing device is intended for use in patients with sensorineural hearing loss. The device employs piezoelectric transducers to sense ossicle motion and drive the stapes. Programmable signal processing parameters include amplification, compression, and variable frequency response. The fully implantable attribute allows users to take advantage of normal external ear resonances and head-related transfer functions, while avoiding undesirable earmold effects. The high sensitivity, low power consumption, and high fidelity attributes of piezoelectric transducers minimize acoustic feedback and maximize battery life (Gyo, 1996; Yanagihara, 1987, 2001). The surgical procedure to install the device has been accurately defined and implantation is reversible. PMID:25425915
Uhler, Kristin M; Hunter, Sharon K; Tierney, Elyse; Gilley, Phillip M
2018-06-01
To examine the utility of the mismatch response (MMR) and acoustic change complex (ACC) for assessing speech discrimination in infants. Continuous EEG was recorded during sleep from 48 normally hearing infants (24 male, 20 female) aged 1.77 to 4.57 months in response to two auditory discrimination tasks. The ACC was recorded in response to a three-vowel sequence (/i/-/a/-/i/). The MMR was recorded in response to a standard vowel, /a/ (probability 85%), and a deviant vowel, /i/ (probability 15%). A priori comparisons included age, sex, and sleep state, conducted separately for each of three bandpass filter settings (1-18, 1-30, and 1-40 Hz). A priori tests revealed no differences in MMR or ACC for age, sex, or sleep state at any of the three filter settings. ACC and MMR responses were prominently observed in all 44 sleeping infants analyzed (data from four infants were excluded). Significant ACC differences were observed at the onset and offset of the stimuli; however, neither group nor individual differences were observed in the ACC in response to changes in the speech stimuli. The MMR revealed two prominent peaks, one at stimulus onset and one at stimulus offset. Permutation t-tests revealed significant differences between the standard and deviant stimuli for both the onset and offset MMR peaks (p < 0.01). The 1-18 Hz filter setting revealed significant differences for all participants in the MMR paradigm. Both ACC and MMR responses were observed to auditory stimulation, suggesting that infants perceive and process speech information even during sleep. Significant differences between the standard and deviant responses were observed in the MMR, but not the ACC, paradigm. These findings suggest that the MMR is sensitive to auditory/speech discrimination processing. This paper identified that the MMR can be used to identify discrimination in normal-hearing infants, suggesting that the MMR has potential for use in infants with hearing loss to validate hearing aid fittings.
Language learning impairments: integrating basic science, technology, and remediation.
Tallal, P; Merzenich, M M; Miller, S; Jenkins, W
1998-11-01
One of the fundamental goals of the modern field of neuroscience is to understand how neuronal activity gives rise to higher cortical function. However, to bridge the gap between neurobiology and behavior, we must understand higher cortical functions at the behavioral level at least as well as we have come to understand neurobiological processes at the cellular and molecular levels. This is certainly the case in the study of speech processing, where critical studies of behavioral dysfunction have provided key insights into the basic neurobiological mechanisms relevant to speech perception and production. Much of this progress derives from a detailed analysis of the sensory, perceptual, cognitive, and motor abilities of children who fail to acquire speech, language, and reading skills normally within the context of otherwise normal development. Current research now shows that a dysfunction in normal phonological processing, which is critical to the development of oral and written language, may derive, at least in part, from difficulties in perceiving and producing basic sensory-motor information in rapid succession--within tens of ms (see Tallal et al. 1993a for a review). There is now substantial evidence supporting the hypothesis that basic temporal integration processes play a fundamental role in establishing neural representations for the units of speech (phonemes), which must be segmented from the (continuous) speech stream and combined to form words, in order for the normal development of oral and written language to proceed. Results from magnetic resonance imaging (MRI) and positron emission tomography (PET) studies, as well as studies of behavioral performance in normal and language impaired children and adults, will be reviewed to support the view that the integration of rapidly changing successive acoustic events plays a primary role in phonological development and disorders. Finally, remediation studies based on this research, coupled with neuroplasticity research, will be presented.
Deaf individuals who work with computers present a high level of visual attention.
Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares
2011-01-01
Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration during the process of constructing different ways of communicating. The aim of this study was to evaluate the level of attention in individuals deaf from birth who worked with computers. A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers, 42 were deaf individuals who did not work and neither knew how to use nor used computers (Control 1), 39 were individuals with normal hearing who did not work and neither knew how to use nor used computers (Control 2), and 40 were individuals with normal hearing who worked with computers (Control 3). The group of subjects deaf from birth who worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity, and resistance to interference compared with the control groups. This study highlights the relevance of sensory experience to cognitive processing.
AuBuchon, Angela M.; Pisoni, David B.; Kronenberger, William G.
2015-01-01
OBJECTIVES Determine if early-implanted, long-term cochlear implant (CI) users display delays in verbal short-term and working memory capacity when processes related to audibility and speech production are eliminated. DESIGN Twenty-three long-term CI users and 23 normal-hearing controls each completed forward and backward digit span tasks under testing conditions which differed in presentation modality (auditory or visual) and response output (spoken recall or manual pointing). RESULTS Normal-hearing controls reproduced more lists of digits than the CI users, even when the test items were presented visually and the responses were made manually via touchscreen response. CONCLUSIONS Short-term and working memory delays observed in CI users are not due to greater demands from peripheral sensory processes such as audibility or from overt speech-motor planning and response output organization. Instead, CI users are less efficient at encoding and maintaining phonological representations in verbal short-term memory utilizing phonological and linguistic strategies during memory tasks. PMID:26496666
Wu, Dan; Chen, Jian-yong; Wang, Shuo; Zhang, Man-hua; Chen, Jing; Li, Yu-ling; Zhang, Hua
2013-03-01
To evaluate the relationship between the Mandarin acceptable noise level (ANL) and personality traits in normal-hearing adults. Eighty-five Mandarin speakers, aged 21 to 27, participated in this study. ANL materials and the Eysenck Personality Questionnaire (EPQ) were used to measure the acceptable noise level and personality traits of normal-hearing subjects. SPSS 17.0 was used to analyze the results. The mean ANL was (7.8 ± 2.9) dB in normal-hearing participants. The P and N scores of the EPQ were significantly correlated with ANL (r = 0.284 and 0.318, P < 0.01). No significant correlations were found between ANL and the E and L scores (r = -0.036 and -0.167, P > 0.05). Listeners with higher ANLs were more likely to be eccentric, hostile, aggressive, and unstable; no ANL differences were found between listeners differing in introversion-extraversion or lie-scale scores.
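The correlations reported above (e.g., r = 0.284 between ANL and EPQ P scores) are ordinary Pearson product-moment coefficients. As a minimal sketch of how such a coefficient is computed (illustrative data only, not the study's):

```python
# Minimal sketch of the Pearson correlation coefficient used to relate
# ANL to EPQ subscale scores. The data below are made up for illustration.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear positive relationship gives r = 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # -> 1.0
```

With n = 85 listeners, coefficients of roughly 0.28 or larger reach significance at the P < 0.01 level quoted above, while the near-zero E and L correlations do not.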
Childhood Otitis Media: A Cohort Study With 30-Year Follow-Up of Hearing (The HUNT Study).
Aarhus, Lisa; Tambs, Kristian; Kvestad, Ellen; Engdahl, Bo
2015-01-01
To study the extent to which otitis media (OM) in childhood is associated with adult hearing thresholds. Furthermore, to study whether the effects of OM on adult hearing thresholds are moderated by age or noise exposure. Population-based cohort study of 32,786 participants who had their hearing tested by pure-tone audiometry in primary school and again at ages ranging from 20 to 56 years. Three thousand sixty-six children were diagnosed with hearing loss; the remaining sample had normal childhood hearing. Compared with participants with normal childhood hearing, those diagnosed with childhood hearing loss caused by otitis media with effusion (n = 1255), chronic suppurative otitis media (CSOM; n = 108), or hearing loss after recurrent acute otitis media (rAOM; n = 613) had significantly increased adult hearing thresholds in the whole frequency range (2 dB/17-20 dB/7-10 dB, respectively). The effects were adjusted for age, sex, and noise exposure. Children diagnosed with hearing loss after rAOM had somewhat improved hearing thresholds as adults. The effects of CSOM and hearing loss after rAOM on adult hearing thresholds were larger in participants tested in middle adulthood (ages 40 to 56 years) than in those tested in young adulthood (ages 20 to 40 years). Eardrum pathology added a marginally increased risk of adult hearing loss (1-3 dB) in children with otitis media with effusion or hearing loss after rAOM. The study could not reveal significant differences in the effect of self-reported noise exposure on adult hearing thresholds between the groups with OM and the group with normal childhood hearing. This cohort study indicates that CSOM and rAOM in childhood are associated with adult hearing loss, underlining the importance of optimal treatment in these conditions. 
It appears that ears with hearing loss after childhood OM age at a faster rate than those without; however, this should be confirmed by studies with several follow-up tests through adulthood.
Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease
Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.
2013-01-01
We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. Perceptual thresholds for letter and word discrimination increased from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256
Song, Jae-Jin; Vanneste, Sven; Lazard, Diane S; Van de Heyning, Paul; Park, Joo Hyun; Oh, Seung Ha; De Ridder, Dirk
2015-05-01
Previous positron emission tomography (PET) studies have shown that various cortical areas are activated to process speech signals in cochlear implant (CI) users. Nonetheless, differences in task dimensions among studies and low statistical power preclude a clear understanding of sound-processing mechanisms in CI users. Hence, we performed an activation likelihood estimation meta-analysis of PET studies in CI users and normal hearing (NH) controls to compare the two groups. Eight studies (58 CI subjects/92 peak coordinates; 45 NH subjects/40 peak coordinates) were included and analyzed, retrieving areas significantly activated by lexical and nonlexical stimuli. For lexical and nonlexical stimuli, both groups showed activations in the components of the dual-stream model such as bilateral superior temporal gyrus/sulcus, middle temporal gyrus, left posterior inferior frontal gyrus, and left insula. However, CI users displayed additional unique activation patterns for lexical and nonlexical stimuli. That is, for the lexical stimuli, significant activations were observed in areas comprising the salience network (SN), also known as the intrinsic alertness network, such as the left dorsal anterior cingulate cortex (dACC), left insula, and right supplementary motor area in the CI user group. Also, for the nonlexical stimuli, CI users activated areas comprising the SN such as the right insula and left dACC. Previous episodic observations on lexical stimuli processing using the dual auditory stream in CI users were reconfirmed in this study. However, this study also suggests that dual-stream auditory processing in CI users may need support from the SN. In other words, CI users need to pay extra attention to cope with the degraded auditory signal provided by the implant. © 2015 Wiley Periodicals, Inc.
Vowel perception by noise masked normal-hearing young adults
NASA Astrophysics Data System (ADS)
Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen
2005-08-01
This study examined vowel perception by young normal-hearing (YNH) adults in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create audibility equal to that of the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ʌ æ/ when F1 or F2 varied in frequency. When audibility was matched, by using noise masking to elevate the hearing thresholds of the YNH listeners and applying frequency-specific gain for the YHI listeners, comparison of YNH with YHI results failed to reveal significant differences between groups in vowel discrimination performance. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.
Infant vocalizations and the early diagnosis of severe hearing impairment.
Eilers, R E; Oller, D K
1994-02-01
To determine whether late onset of canonical babbling could be used as a criterion to determine risk of hearing impairment, we obtained vocalization samples longitudinally from 94 infants with normal hearing and 37 infants with severe to profound hearing impairment. Parents were instructed to report the onset of canonical babbling (the production of well-formed syllables such as "da," "na," "bee," "yaya"). Verification that the infants were producing canonical syllables was collected in laboratory audio recordings. Infants with normal hearing produced canonical vocalizations before 11 months of age (range, 3 to 10 months; mode, 7 months); infants who were deaf failed to produce canonical syllables until 11 months of age or older, often well into the third year of life (range, 11 to 49 months; mode, 24 months). The correlation between age at onset of the canonical stage and age at auditory amplification was 0.68, indicating that early identification and fitting of hearing aids is of significant benefit to infants learning language. The fact that there is no overlap in the distribution of the onset of canonical babbling between infants with normal hearing and infants with hearing impairment means that the failure of otherwise healthy infants to produce canonical syllables before 11 months of age should be considered a serious risk factor for hearing impairment and, when observed, should result in immediate referral for audiologic evaluation.
Binaural hearing with electrical stimulation.
Kan, Alan; Litovsky, Ruth Y
2015-04-01
Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including hardware and engineering, surgical precision, and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that cannot be readily controlled, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. This article is part of a Special Issue entitled
NASA Astrophysics Data System (ADS)
Narendran, Mini M.; Humes, Larry E.
2003-04-01
Increasing the rate of presentation can have a deleterious effect on auditory processing, especially among the elderly. Rate can be manipulated by changing the duration of individual components of a sequence of sounds, by changing the inter-stimulus interval (ISI) between components, or both. Consequently, when age-related deficits in performance appear to be attributable to rate of stimulus presentation, it is often the case that alternative explanations in terms of the effects of stimulus duration or ISI are also possible. In this study, the independent effects of duration and ISI on the discrimination of temporal order for four-tone sequences were investigated in a group of young normal-hearing and elderly hearing-impaired listeners. It was found that discrimination performance was driven by the rate of presentation, rather than stimulus duration or ISI alone, for both groups of listeners. The performance of the two groups of listeners differed significantly for the fastest presentation rates, but was similar for the slower rates. Slowing the rate of presentation seemed to improve performance, regardless of whether this was done by increasing stimulus duration or increasing ISI, and this was observed for both groups of listeners. [Work supported, in part, by NIA.]
Masking Release in Children and Adults With Hearing Loss When Using Amplification
McCreery, Ryan; Kopun, Judy; Lewis, Dawna; Alexander, Joshua; Stelmachowicz, Patricia
2016-01-01
Purpose This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. Results Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. Conclusions The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed. PMID:26540194
Static and dynamic balance of children and adolescents with sensorineural hearing loss.
Melo, Renato de Souza; Marinho, Sônia Elvira Dos Santos; Freire, Maryelly Evelly Araújo; Souza, Robson Arruda; Damasceno, Hélio Anderson Melo; Raposo, Maria Cristina Falcão
2017-01-01
To assess the static and dynamic balance performance of students with normal hearing and with sensorineural hearing loss. A cross-sectional study assessed 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, of both sexes, aged 7 to 18 years. To evaluate static balance, the Romberg, Romberg-Barré and Fournier tests were used; for dynamic balance, we applied the Unterberger test. Students with hearing loss showed more changes in static and dynamic balance than normal-hearing students in all tests used (p<0.001). The same difference was found when subjects were grouped by sex. For females, the Romberg, Romberg-Barré, Fournier and Unterberger test p values were, respectively, p=0.004, p<0.001, p<0.001 and p=0.023; for males, the p values were p=0.009, p<0.001, p<0.001 and p=0.002, respectively. The same difference was observed when students were classified by age. For students aged 7 to 10 years, the p values for the Romberg, Romberg-Barré and Fournier tests were, respectively, p=0.007, p<0.001 and p=0.001; for those aged 11 to 14 years, the p values for the Romberg, Romberg-Barré, Fournier and Unterberger tests were p=0.002, p<0.001, p<0.001 and p=0.015, respectively; and for those aged 15 to 18 years, the p values for the Romberg-Barré, Fournier and Unterberger tests were, respectively, p=0.037, p<0.001 and p=0.037. Students with hearing loss showed more changes in static and dynamic balance than normal-hearing students of the same sex and age groups.
Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.
Huber, Maria; Kipman, Ulrike; Pletzer, Belinda
2014-07-01
The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower than or comparable to that of their normal hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills and hearing variables. 23 children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal hearing peers matched by age, gender, and social background participated in this case-control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks, compared to normal hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.
Auditory processing disorders, verbal disfluency, and learning difficulties: a case study.
Jutras, Benoît; Lagacé, Josée; Lavigne, Annik; Boissonneault, Andrée; Lavoie, Charlen
2007-01-01
This case study reports the findings of auditory behavioral and electrophysiological measures performed on a graduate student (identified as LN) presenting verbal disfluency and learning difficulties. Results of behavioral audiological testing documented the presence of auditory processing disorders, particularly in temporal processing and binaural integration. Electrophysiological test results, including middle latency, late latency and cognitive potentials, revealed that LN's central auditory system processes acoustic stimuli differently from a reference group with normal hearing.
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. 
Instead, the patterns of activity during the task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation of 'visual' cortex by light or retinal activity in the former condition, suggesting the development of subcortical auditory input to the geniculo-striate pathway.
Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.
Poremba, Amy; Mishkin, Mortimer
2007-07-01
Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.
Developmental Conductive Hearing Loss Reduces Modulation Masking Release
Chen, Yi-Wen; Sanes, Dan H.
2016-01-01
Hearing-impaired individuals experience difficulties in detecting or understanding speech, especially in background sounds within the same frequency range. However, normally hearing (NH) human listeners experience less difficulty detecting a target tone in background noise when the envelope of that noise is temporally gated (modulated) than when that envelope is flat across time (unmodulated). This perceptual benefit is called modulation masking release (MMR). When flanking masker energy is added well outside the frequency band of the target, and comodulated with the original modulated masker, detection thresholds improve further (MMR+). In contrast, if the flanking masker is antimodulated with the original masker, thresholds worsen (MMR−). These interactions across disparate frequency ranges are thought to require central nervous system (CNS) processing. Therefore, we explored the effect of developmental conductive hearing loss (CHL) in gerbils on MMR characteristics, as a test for putative CNS mechanisms. The detection thresholds of NH gerbils were lower in modulated noise, when compared with unmodulated noise. The addition of a comodulated flanker further improved performance, whereas an antimodulated flanker worsened performance. However, for CHL-reared gerbils, all three forms of masking release were reduced when compared with NH animals. These results suggest that developmental CHL impairs both within- and across-frequency processing and provide behavioral evidence that CNS mechanisms are affected by a peripheral hearing impairment. PMID:28215119
Bernstein, Leslie R; Trahiotis, Constantine
2016-11-01
This study assessed whether audiometrically-defined "slight" or "hidden" hearing losses might be associated with degradations in binaural processing as measured in binaural detection experiments employing interaurally delayed signals and maskers. Thirty-one listeners participated, all having no greater than slight hearing losses (i.e., no thresholds greater than 25 dB HL). Across the 31 listeners and consistent with the findings of Bernstein and Trahiotis [(2015). J. Acoust. Soc. Am. 138, EL474-EL479] binaural detection thresholds at 500 Hz and 4 kHz increased with increasing magnitude of interaural delay, suggesting a loss of precision of coding with magnitude of interaural delay. Binaural detection thresholds were consistently found to be elevated for listeners whose absolute thresholds at 4 kHz exceeded 7.5 dB HL. No such elevations were observed in conditions having no binaural cues available to aid detection (i.e., "monaural" conditions). Partitioning and analyses of the data revealed that those elevated thresholds (1) were more attributable to hearing level than to age and (2) result from increased levels of internal noise. The data suggest that listeners whose high-frequency monaural hearing status would be classified audiometrically as being normal or "slight loss" may exhibit substantial and perceptually meaningful losses of binaural processing.
Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian
2017-09-18
The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids. Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performances. Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance. Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performances of an ASR-based system. In the future, it needs to be determined if the ASR system is similarly successful in predicting speech processing in noise and by older people with ARHL.
NASA Astrophysics Data System (ADS)
Ferguson, Sarah Hargus
2005-09-01
It is well known that, for listeners with normal hearing, speech produced by non-native speakers of the listener's first language is less intelligible than speech produced by native speakers. Intelligibility is well correlated with listeners' ratings of talker comprehensibility and accentedness, which have been shown to be related to several talker factors, including age of second language acquisition and level of similarity between the talker's native and second language phoneme inventories. Relatively few studies have focused on factors extrinsic to the talker. The current project explored the effects of listener and environmental factors on the intelligibility of foreign-accented speech. Specifically, monosyllabic English words previously recorded from two talkers, one a native speaker of American English and the other a native speaker of Spanish, were presented to three groups of listeners (young listeners with normal hearing, elderly listeners with normal hearing, and elderly listeners with hearing impairment; n=20 each) in three different listening conditions (undistorted words in quiet, undistorted words in 12-talker babble, and filtered words in quiet). Data analysis will focus on interactions between talker accent, listener age, listener hearing status, and listening condition. [Project supported by American Speech-Language-Hearing Association AARC Award.]
Masking Release in Children and Adults with Hearing Loss When Using Amplification
ERIC Educational Resources Information Center
Brennan, Marc; McCreery, Ryan; Kopun, Judy; Lewis, Dawna; Alexander, Joshua; Stelmachowicz, Patricia
2016-01-01
Purpose: This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method: Sentence recognition in unmodulated noise was compared with recognition…
Sign Language and Pantomime Production Differentially Engage Frontal and Parietal Cortices
ERIC Educational Resources Information Center
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Ponto, Laura L. B.; Grabowski, Thomas J.
2011-01-01
We investigated the functional organisation of neural systems supporting language production when the primary language articulators are also used for meaningful, but nonlinguistic, expression such as pantomime. Fourteen hearing nonsigners and 10 deaf native users of American Sign Language (ASL) participated in an H[subscript 2][superscript…
Ortiz Alonso, Tomás; Santos, Juan Matías; Ortiz Terán, Laura; Borrego Hernández, Mayelin; Poch Broto, Joaquín; de Erausquin, Gabriel Alejandro
2015-01-01
Compared to their seeing counterparts, people with blindness have a greater tactile capacity. Differences in the physiology of object recognition between people with blindness and seeing people have been well documented, but not when tactile stimuli require semantic processing. We used a passive vibrotactile device to focus on the differences in spatial brain processing evaluated with event related potentials (ERP) in children with blindness (n = 12) vs. normally seeing children (n = 12), when learning a simple spatial task (lines with different orientations) or a task involving recognition of letters, to describe the early stages of its temporal sequence (from 80 to 220 msec) and to search for evidence of multi-modal cortical organization. We analysed the P100 of the ERP. Children with blindness showed earlier latencies for cognitive (perceptual) event related potentials, shorter reaction times, and (paradoxically) worse ability to identify the spatial direction of the stimulus. On the other hand, they are equally proficient in recognizing stimuli with semantic content (letters). The last observation is consistent with the role of P100 on somatosensory-based recognition of complex forms. The cortical differences between seeing control and blind groups, during spatial tactile discrimination, are associated with activation in visual pathway (occipital) and task-related association (temporal and frontal) areas. The present results show that early processing of tactile stimulation conveying cross modal information differs in children with blindness or with normal vision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodman, L.R.; Teplick, S.K.; Kay, H.
The normal CT anatomy of the sternum was studied in 35 patients. In addition to the normal appearance of the sternum, normal variants that may mimic disease were often noted. In the manubrium, part of the posterior cortical margin was unsharp and irregular in 34 of 35 patients. Part of the anterior cortical margin was indistinct in 20 of the 35 patients. Angulation of the CT gantry to a position more nearly perpendicular to the manubrium improved the definition of the cortical margins. The body of the sternum was ovoid to rectangular and usually had sharp cortical margins. Sections through the manubriosternal joint and xiphoid often demonstrated irregular mottled calcifications and indistinct margins, again simulating bony lesions. The rib insertions, sternoclavicular joints, and adjacent soft-tissue appearance also were evaluated.
Calandruccio, Lauren; Bradlow, Ann R; Dhar, Sumitrajit
2014-04-01
Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was used. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Three listener groups were tested: monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric, mild sloping to moderate sensorineural hearing loss. Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups.
Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native accented, allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the nonnative English-speaking listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Slight modifications between the target and the masker speech allowed monolingual English speakers with normal hearing to improve their recognition of native-accented English, even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker or determining why monolingual normal-hearing listeners can take advantage of these differences could help improve speech recognition for those with hearing loss in the future.
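The spectral-matching step described above (normalizing the long-term average speech spectra of the maskers to each other) can be sketched, for illustration only, with smoothed FFT magnitude-ratio filtering. The function name, band count, and smoothing scheme below are assumptions, not the authors' actual procedure:

```python
import numpy as np

def match_ltass(signal, reference, n_bands=64):
    """Filter `signal` so its long-term average spectrum approximates that of
    `reference`, using per-band smoothed FFT magnitude ratios."""
    n = min(len(signal), len(reference))
    sig, ref = signal[:n], reference[:n]
    S = np.fft.rfft(sig)
    mag_s = np.abs(S)
    mag_r = np.abs(np.fft.rfft(ref))
    # Smooth both spectra in coarse bands so single noisy bins are not amplified.
    edges = np.linspace(0, len(mag_s), n_bands + 1, dtype=int)
    gain = np.ones_like(mag_s)
    for a, b in zip(edges[:-1], edges[1:]):
        if b > a:
            gain[a:b] = np.sqrt(np.mean(mag_r[a:b] ** 2) /
                                (np.mean(mag_s[a:b] ** 2) + 1e-12))
    return np.fft.irfft(S * gain, n=n)
```

By construction, each coarse band of the output carries the same average power as the corresponding band of the reference, which is the sense in which the long-term spectra are "normalized to each other."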
Bess, F H; Dodd-Murphy, J; Parker, R A
1998-10-01
This study was designed to determine the prevalence of minimal sensorineural hearing loss (MSHL) in school-age children and to assess the relationship of MSHL to educational performance and functional status. To determine prevalence, a single-staged sampling frame of all schools in the district was created for 3rd, 6th, and 9th grades. Schools were selected with probability proportional to size in each grade group. The final study sample was 1218 children. To assess the association of MSHL with educational performance, children identified with MSHL were assigned as cases into a subsequent case-control study. Scores of the Comprehensive Test of Basic Skills (4th Edition) (CTBS/4) then were compared between children with MSHL and children with normal hearing. School teachers completed the Screening Instrument for Targeting Education Risk (SIFTER) and the Revised Behavior Problem Checklist for a subsample of children with MSHL and their normally hearing counterparts. Finally, data on grade retention for a sample of children with MSHL were obtained from school records and compared with school district norm data. To assess the relationship between MSHL and functional status, test scores of all children with MSHL and all children with normal hearing in grades 6 and 9 were compared on the COOP Adolescent Chart Method (COOP), a screening tool for functional status. MSHL was exhibited by 5.4% of the study sample. The prevalence of all types of hearing impairment was 11.3%. Third grade children with MSHL exhibited significantly lower scores than normally hearing controls on a series of subtests of the CTBS/4; however, no differences were noted at the 6th and 9th grade levels. The SIFTER results revealed that children with MSHL scored poorer on the communication subtest than normal-hearing controls. Thirty-seven percent of the children with MSHL failed at least one grade. 
Finally, children with MSHL exhibited significantly greater dysfunction than children with normal hearing on several subtests of the COOP including behavior, energy, stress, social support, and self-esteem. The prevalence of hearing loss in the schools almost doubles when children with MSHL are included. This large, education-based study shows clinically important associations between MSHL and school behavior and performance. Children with MSHL experienced more difficulty than normally hearing children on a series of educational and functional test measures. Although additional research is necessary, results suggest the need for audiologists, speech-language pathologists, and educators to evaluate carefully our identification and management approaches with this population. Better efforts to manage these children could result in meaningful improvement in their educational progress and psychosocial well-being.
Reinhart, Paul N; Souza, Pamela E
2018-01-01
Reverberation enhances music perception and is one of the most important acoustic factors in auditorium design. However, previous research on reverberant music perception has focused on young normal-hearing (YNH) listeners. Old hearing-impaired (OHI) listeners have degraded spatial auditory processing; therefore, they may perceive reverberant music differently. Two experiments were conducted examining the effects of varying reverberation on music perception for YNH and OHI listeners. Experiment 1 examined whether YNH listeners and OHI listeners prefer different amounts of reverberation for classical music listening. Symphonic excerpts were processed at a range of reverberation times using a point-source simulation. Listeners performed a paired-comparisons task in which they heard two excerpts with different reverberation times, and they indicated which they preferred. The YNH group preferred a reverberation time of 2.5 s; however, the OHI group did not demonstrate any significant preference. Experiment 2 examined whether OHI listeners are less sensitive to (i.e., less able to discriminate) differences in reverberation time than YNH listeners. YNH and OHI participants listened to pairs of music excerpts and indicated whether they perceived the same or different amount of reverberation. Results indicated that the ability of both groups to detect differences in reverberation time improved with increasing reverberation time difference. However, discrimination was poorer for the OHI group than for the YNH group. This suggests that OHI listeners are less sensitive to differences in reverberation when listening to music than YNH listeners, which may explain the absence of a reverberation time preference in the OHI group.
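A point-source reverberation simulation of the general kind described can be approximated by convolving the signal with an exponentially decaying noise impulse response whose envelope falls 60 dB over the nominal reverberation time (RT60). This is a generic sketch under that standard definition, not the study's actual processing chain:

```python
import numpy as np

def synthetic_reverb_ir(rt60_s, fs=16000, seed=0):
    """Exponentially decaying Gaussian-noise impulse response whose
    amplitude envelope drops 60 dB over rt60_s seconds."""
    rng = np.random.default_rng(seed)
    n = int(rt60_s * fs)
    t = np.arange(n) / fs
    # -60 dB in amplitude at t = rt60  ->  envelope = 10**(-3 t / rt60)
    env = 10.0 ** (-3.0 * t / rt60_s)
    return rng.standard_normal(n) * env

def apply_reverb(x, ir):
    """Convolve the dry signal with the impulse response (truncated to len(x))."""
    return np.convolve(x, ir)[: len(x)]
```

Varying `rt60_s` across excerpts would then produce the kind of reverberation-time continuum used in the paired-comparison and discrimination tasks.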
Willis, Suzi; Goldbart, Juliet; Stansfield, Jois
2014-07-01
To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language-learning difficulties with normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as the primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than those of the age-matched sample from the standardized memory assessment. Each of the six participants in this study displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties and indeed demonstrate strengths in visual working memory. The poor ability to recall words, in combination with difficulties with early word learning, may be an indicator of children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties.
High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat
Butler, Blake E.; Hall, Amee J.; Lomber, Stephen G.
2015-01-01
The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential “pitch centres” in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed to occur in either primary auditory cortex (A1) or the anterior auditory field (AAF) which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception. PMID:26225563
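Iterated rippled noise of the kind used in this study is conventionally generated with a delay-and-add network: a delayed, scaled copy of the noise is added back to itself for a fixed number of iterations, producing a pitch percept near 1/delay while retaining a noise-like spectrum. A minimal sketch (parameter values are illustrative, not those of the study):

```python
import numpy as np

def iterated_rippled_noise(duration_s=1.0, fs=44100, delay_s=1 / 200,
                           gain=1.0, iterations=16, seed=0):
    """Delay-and-add IRN: each iteration adds a delayed, scaled copy of the
    signal to itself, yielding a pitch at roughly 1/delay_s (here ~200 Hz)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    d = int(round(delay_s * fs))
    y = rng.standard_normal(n)
    for _ in range(iterations):
        delayed = np.concatenate([np.zeros(d), y[:-d]])
        y = y + gain * delayed
    return y / np.max(np.abs(y))  # peak-normalize
```

The pitch strength grows with the number of iterations; randomizing the phase spectrum afterwards (as in the IRNo control condition) destroys the pitch while preserving slow envelope modulations.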
Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh
2018-04-01
A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants, selected through convenience sampling. The Raven test was administered to both groups to ensure normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.
Speech Restoration: An Interactive Process
ERIC Educational Resources Information Center
Grataloup, Claire; Hoen, Michael; Veuillet, Evelyne; Collet, Lionel; Pellegrino, Francois; Meunier, Fanny
2009-01-01
Purpose: This study investigates the ability to understand degraded speech signals and explores the correlation between this capacity and the functional characteristics of the peripheral auditory system. Method: The authors evaluated the capability of 50 normal-hearing native French speakers to restore time-reversed speech. The task required them…
Goberis, Dianne; Beams, Dinah; Dalpes, Molly; Abrisch, Amanda; Baca, Rosalinda; Yoshinaga-Itano, Christine
2012-11-01
This article will provide information about the Pragmatics Checklist, which consists of 45 items and is scored as: (1) not present, (2) present but preverbal, (3) present with one to three words, and (4) present with complex language. Information for both children who are deaf or hard of hearing and children with normal hearing is presented. Children who are deaf or hard of hearing are significantly older when demonstrating skill with complex language than their normal-hearing peers. In general, even at the age of 7 years, there are several items that are not mastered by 75% of the deaf or hard of hearing children. Additionally, the article will provide some suggestions for strategies that can be considered as a means to facilitate the development of these pragmatic language skills for children who are deaf or hard of hearing.
Stability of the Medial Olivocochlear Reflex as Measured by Distortion Product Otoacoustic Emissions
ERIC Educational Resources Information Center
Mishra, Srikanta K.; Abdala, Carolina
2015-01-01
Purpose: The purpose of this study was to assess the repeatability of a fine-resolution, distortion product otoacoustic emission (DPOAE)-based assay of the medial olivocochlear (MOC) reflex in normal-hearing adults. Method: Data were collected during 36 test sessions from 4 normal-hearing adults to assess short-term stability and 5 normal-hearing…
Processing of voices in deafness rehabilitation by auditory brainstem implant.
Coez, Arnaud; Zilbovicius, Monica; Ferrary, Evelyne; Bouccara, Didier; Mosnier, Isabelle; Ambert-Dahan, Emmanuèle; Kalamarides, Michel; Bizaguet, Eric; Syrota, André; Samson, Yves; Sterkers, Olivier
2009-10-01
The superior temporal sulcus (STS) is specifically involved in processing the human voice. Profound acquired deafness due to post-meningitis ossified cochlea and to bilateral vestibular schwannoma in neurofibromatosis type 2 patients are two indications for auditory brainstem implantation (ABI). In order to objectively measure cortical voice processing in a group of ABI patients, we studied the activation of the human temporal voice areas (TVA) by PET H(2)(15)O, performed in a group of implanted deaf adults (n=7) with more than two years of auditory brainstem implant experience and an average intelligibility score of 17%+/-17 [mean+/-SD]. Relative cerebral blood flow (rCBF) was measured in the following three conditions: silence, passive listening to human voice, and passive listening to non-voice stimuli. Compared to silence, the activations induced by voice and non-voice stimuli were bilaterally located in the superior temporal regions. However, compared to non-voice stimuli, the voice stimuli did not induce specific supplementary activation of the TVA along the STS. Comparison of the ABI group with a normal-hearing control group (n=7) showed that TVA activations were significantly greater in the control group. ABI allowed the transmission of sound stimuli to temporal brain regions but failed to transmit the specific cues of the human voice to the TVA. Moreover, during the silent condition, visual brain regions showed higher rCBF in the ABI group, whereas temporal brain regions showed higher rCBF in the control group. ABI patients had consequently developed enhanced visual strategies to maintain interaction with their environment.
Dopamine-dependent periadolescent maturation of corticostriatal functional connectivity in mouse.
Galiñanes, Gregorio L; Taravini, Irene R E; Murer, M Gustavo
2009-02-25
Altered corticostriatal information processing associated with early dopamine systems dysfunction may contribute to attention deficit/hyperactivity disorder (ADHD). Mice with neonatal dopamine-depleting lesions exhibit hyperactivity that wanes after puberty and is reduced by psychostimulants, reminiscent of some aspects of ADHD. To assess whether the maturation of corticostriatal functional connectivity is altered by early dopamine depletion, we examined preadolescent and postadolescent urethane-anesthetized mice with or without dopamine-depleting lesions. Specifically, we assessed (1) synchronization between striatal neuron discharges and oscillations in frontal cortex field potentials and (2) striatal neuron responses to frontal cortex stimulation. In adult control mice striatal neurons were less spontaneously active, less responsive to cortical stimulation, and more temporally tuned to cortical rhythms than in infants. Striatal neurons from hyperlocomotor mice required more current to respond to cortical input and were less phase locked to ongoing oscillations, resulting in fewer neurons responding to refined cortical commands. By adulthood some electrophysiological deficits waned together with hyperlocomotion, but striatal spontaneous activity remained substantially elevated. Moreover, dopamine-depleted animals showing normal locomotor scores exhibited normal corticostriatal synchronization, suggesting that the lesion allows, but is not sufficient, for the emergence of corticostriatal changes and hyperactivity. Although amphetamine normalized corticostriatal tuning in hyperlocomotor mice, it reduced horizontal activity in dopamine-depleted animals regardless of their locomotor phenotype, suggesting that amphetamine modified locomotion through a parallel mechanism, rather than that modified by dopamine depletion. 
In summary, functional maturation of striatal activity continues after infancy, and early dopamine depletion delays the maturation of core functional capacities of the corticostriatal system.
Füllgrabe, Christian; Moore, Brian C. J.; Stone, Michael A.
2015-01-01
Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60–79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125–6 kHz were matched to nine young (YNH; 18–27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5–180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. 
These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric sensitivity. PMID:25628563
Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W
2014-07-01
Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
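A spectrotemporal modulation (moving-ripple) stimulus of the general type described, an octave-band carrier whose spectral envelope drifts sinusoidally across log-frequency over time, can be built as a sum of log-spaced random-phase tones with sinusoidally co-modulated amplitudes. This is an illustrative construction with assumed parameter values; the study's exact synthesis may differ:

```python
import numpy as np

def moving_ripple(dur=1.0, fs=16000, f_lo=2000, f_hi=4000, n_tones=60,
                  rate_hz=4.0, density_c_o=2.0, depth=1.0, seed=0):
    """Octave-band moving ripple: log-spaced random-phase tones whose
    amplitudes follow a sinusoid drifting across log-frequency at
    rate_hz (temporal rate) with density_c_o cycles/octave."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.logspace(np.log2(f_lo), np.log2(f_hi), n_tones, base=2)
    x = np.log2(freqs / f_lo)  # tone position along the octave axis
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    sig = np.zeros_like(t)
    for f, xi, ph in zip(freqs, x, phases):
        amp = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_c_o * xi))
        sig += amp * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))
```

Sweeping `rate_hz` over 4-32 Hz and `density_c_o` over 0.5-4 c/o, while scaling `depth`, reproduces the kind of modulation-depth detection conditions reported in the abstract.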
Dincer D'Alessandro, Hilal; Filipo, Roberto; Ballantyne, Deborah; Attanasio, Giuseppe; Bosco, Ersilia; Nicastri, Maria; Mancini, Patrizia
2015-11-01
The aim of the present study was to investigate the application of two new pitch perception tests in children with cochlear implants (CI) and to compare CI outcomes with those of normal-hearing (NH) children, as well as to investigate the effect of chronological age on performance. The tests were believed to be linked to the availability of temporal fine structure (TFS) cues. 20 profoundly deaf children with CI (5-17 years) and 31 NH peers participated in the study. The Harmonic Intonation (HI) and Disharmonic Intonation (DI) tests were used to measure low-frequency pitch perception. HI/DI outcomes were poorer in children with CI; the CI and NH groups showed a statistically significant difference (p < 0.001). HI scores were better than DI scores (p < 0.001). Chronological age had a significant effect on DI performance in the NH group (p < 0.05); children under the age of 8.5 years showed larger inter-subject variability, but the majority of NH children showed outcomes that were considered normal at adult level. For the DI test, bimodal listeners performed better than when listening with the CI alone. The HI/DI tests were applicable as clinical tools in the pediatric population. The majority of CI users showed abnormal outcomes on both tests, confirming poor TFS processing in the hearing-impaired population. Findings indicated that the DI test provided more differential low-frequency pitch perception outcomes, in that it reflected the phase-locking and TFS-processing capacities of the ear, whereas the HI test also provided information about its place-coding capacity.
Binaural fusion and the representation of virtual pitch in the human auditory cortex.
Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E
1996-10-01
The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.
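The dichotic-harmonics condition can be illustrated by splitting the harmonics of a missing fundamental between the two ears: neither ear receives energy at f0, yet binaural fusion yields a virtual pitch at f0. The following is a hypothetical stimulus sketch, not the neuromagnetic study's actual stimulus set:

```python
import numpy as np

def dichotic_missing_fundamental(f0=200, harmonics=(3, 4, 5), fs=16000, dur=0.5):
    """Split the harmonics of f0 between the ears (odd-indexed to the left,
    even-indexed to the right); no ear contains f0 itself, but the fused
    percept carries a virtual pitch at f0."""
    t = np.arange(int(dur * fs)) / fs
    left = sum(np.sin(2 * np.pi * f0 * h * t) for h in harmonics[0::2])
    right = sum(np.sin(2 * np.pi * f0 * h * t) for h in harmonics[1::2])
    return np.stack([left, right])  # shape (2, n_samples): left, right channels
```

Comparing cortical responses to such a dichotic complex against responses to its individual frequency constituents is the kind of contrast the abstract describes.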
Speech Intelligibility and Prosody Production in Children with Cochlear Implants
Chin, Steven B.; Bergeson, Tonya R.; Phan, Jennifer
2012-01-01
Objectives The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants. Methods The Beginner's Intelligibility Test (BIT) and Prosodic Utterance Production (PUP) task were administered to 15 children who use cochlear implants and 10 children with normal hearing. Adult listeners with normal hearing judged the intelligibility of the words in the BIT sentences, identified the PUP sentences as one of four grammatical or emotional moods (i.e., declarative, interrogative, happy, or sad), and rated the PUP sentences according to how well they thought the child conveyed the designated mood. Results Percent correct scores were higher for intelligibility than for prosody and higher for children with normal hearing than for children with cochlear implants. Declarative sentences were most readily identified and received the highest ratings by adult listeners; interrogative sentences were least readily identified and received the lowest ratings. Correlations between intelligibility and all mood identification and rating scores except declarative were not significant. Discussion The findings suggest that the development of speech intelligibility progresses ahead of prosody in both children with cochlear implants and children with normal hearing; however, children with normal hearing still perform better than children with cochlear implants on measures of intelligibility and prosody even after accounting for hearing age. Problems with interrogative intonation may be related to more general restrictions on rising intonation, and the correlation results indicate that intelligibility and sentence intonation may be relatively dissociated at these ages. PMID:22717120
Free Field Word recognition test in the presence of noise in normal hearing adults.
Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge
In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. The aim was to present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure tone audiometry, a speech recognition test was applied in free-field conditions with monosyllables and disyllables, using standardized material, in three listening situations: optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free-field environment was arranged in which speech was presented to the subject from two speakers located at 45°, and noise from a third speaker located at 180°. All participants had free-field speech audiometry results between 88% and 100% in the three listening situations. The Word Recognition Test in Free Field in the Presence of Noise proved easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should correctly repeat between 88% and 100% of the stimuli. The test can be an important tool for measuring the interference of noise with speech perception abilities.
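Presenting speech at a fixed signal-to-noise ratio, as in the 0 dB and -10 dB conditions here, amounts to scaling the noise so that the speech-to-noise power ratio matches the target before mixing. A minimal sketch of that standard calculation (not the study's calibration procedure, which was done acoustically in the free field):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db,
    then return the mixture. Assumes equal-length 1-D arrays."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + noise_scaled
```

At 0 dB SNR the scaled noise carries the same average power as the speech; at -10 dB it carries ten times more.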
Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.
Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G
2009-03-01
This study asked whether listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no-cue) performance with that of the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.
Strait, Dana L.; Kraus, Nina
2011-01-01
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636
Shining a light on posterior cortical atrophy.
Crutch, Sebastian J; Schott, Jonathan M; Rabinovici, Gil D; Boeve, Bradley F; Cappa, Stefano F; Dickerson, Bradford C; Dubois, Bruno; Graff-Radford, Neill R; Krolak-Salmon, Pierre; Lehmann, Manja; Mendez, Mario F; Pijnenburg, Yolande; Ryan, Natalie S; Scheltens, Philip; Shakespeare, Tim; Tang-Wai, David F; van der Flier, Wiesje M; Bain, Lisa; Carrillo, Maria C; Fox, Nick C
2013-07-01
Posterior cortical atrophy (PCA) is a clinicoradiologic syndrome characterized by progressive decline in visual processing skills, relatively intact memory and language in the early stages, and atrophy of posterior brain regions. Misdiagnosis of PCA is common, owing not only to its relative rarity and unusual, variable presentation, but also because patients frequently first seek the opinion of an ophthalmologist, who may note normal results on routine eye examinations but may not appreciate cortical brain dysfunction. Seeking to raise awareness of the disease, stimulate research, and promote collaboration, a multidisciplinary group of PCA research clinicians formed an international working party, which had its first face-to-face meeting on July 13, 2012 in Vancouver, Canada, prior to the Alzheimer's Association International Conference. Copyright © 2013 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Wenjing; He, Huiguang; Lu, Jingjing; Wang, Chunheng; Li, Meng; Lv, Bin; Jin, Zhengyu
2011-03-01
Hippocampal sclerosis (HS) is the most common damage seen in patients with temporal lobe epilepsy (TLE). In the present study, hippocampal-cortical connectivity was defined as the correlation between hippocampal volume and cortical thickness at each vertex throughout the whole brain. We aimed to investigate differences in ipsilateral hippocampal-cortical connectivity between unilateral TLE-HS patients and normal controls. Bilateral hippocampal volumes were first measured in each subject, and the ipsilateral hippocampal volume was significantly decreased in the left TLE-HS patients. Group analysis then showed significantly thinner average cortical thickness of the whole brain in the left TLE-HS patients compared with the normal controls. We found significantly increased ipsilateral hippocampal-cortical connectivity in the bilateral superior temporal gyrus, the right cingulate gyrus, and the left parahippocampal gyrus of the left TLE-HS patients, indicating structural vulnerability related to hippocampal atrophy in the patient group. For the right TLE-HS patients, however, no significant differences were found between patients and normal controls in ipsilateral hippocampal volume, average cortical thickness, or patterns of hippocampal-cortical connectivity, which might be related to the milder atrophy observed in the MRI scans. Our study provides further evidence of structural abnormalities in unilateral TLE-HS patients.
Aging process alters hippocampal and cortical secretase activities of Wistar rats.
Bertoldi, Karine; Cechinel, Laura Reck; Schallenberger, Bruna; Meireles, Louisiana; Basso, Carla; Lovatel, Gisele Agustini; Bernardi, Lisiane; Lamers, Marcelo Lazzaron; Siqueira, Ionara Rodrigues
2017-01-15
A growing body of evidence has demonstrated amyloid plaques in the aged brain; however, little attention has been given to the amyloid precursor protein (APP) processing machinery during healthy aging. The amyloidogenic and non-amyloidogenic pathways, represented respectively by β- and α-secretases (BACE and TACE), are responsible for APP cleavage. Our working hypothesis was that normal aging could imbalance the amyloidogenic and non-amyloidogenic pathways, specifically BACE and TACE activities. In addition, although it has been shown that exercise can modulate secretase activities in Alzheimer's disease models, the relationship between exercise effects and APP processing during healthy aging is rarely studied. Our aim was to investigate the effects of aging and exercise on cortical and hippocampal BACE and TACE activities and on aversive memory performance. Young adult and aged Wistar rats were subjected to an exercise protocol (20 min/day for 2 weeks) and to an inhibitory avoidance task. Biochemical parameters were evaluated 1 h and 18 h after the last exercise session in order to verify transitory and delayed exercise effects. Aged rats exhibited impaired aversive memory and diminished cortical TACE activity. Moreover, an imbalance between TACE and BACE activities in favor of BACE activity was observed in the aged brain. Moderate treadmill exercise was unable to alter secretase activities in any brain area or at any time point evaluated. Our results suggest that aging-related aversive memory decline is partly linked to decreased cortical TACE activity. Additionally, an imbalance between secretase activities may be related to the higher vulnerability to neurodegenerative diseases induced by aging. Copyright © 2016 Elsevier B.V. All rights reserved.
Influences of Working Memory and Audibility on Word Learning in Children with Hearing Loss
ERIC Educational Resources Information Center
Stiles, Derek Jason
2010-01-01
As a group, children with hearing loss demonstrate delays in language development relative to their peers with normal hearing. Early intervention has a profound impact on language outcomes in children with hearing loss. Data examining the relationship between degree of hearing loss and language outcomes are variable. Two approaches are used in the…
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis
2017-02-01
It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech, and language skills in children using a hearing aid and/or cochlear implant compared to their peers with normal hearing. Thus, the aim of the study was to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss in visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample comprised 92 Persian-speaking 5-to-7-year-old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All of these children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. Sentence repetition scores differed significantly across the V-only, A-only, and AV presentations in all three groups; in other words, the highest to lowest scores belonged respectively to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores across all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were strongly correlated with auditory-only scores (r = 0.943, n = 92, P < 0.001).
According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using a hearing aid or cochlear implant during sentence repetition, similarly to their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition be used as a clinical criterion for auditory development in Persian-language children with hearing loss. Copyright © 2016. Published by Elsevier B.V.
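The reported associations (e.g., r = 0.943 between audiovisual and auditory-only scores) are Pearson product-moment correlations between paired per-child scores. A minimal sketch of that computation follows; the score lists are hypothetical illustrations, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-child repetition scores (percent correct), for illustration:
auditory_only = [70, 75, 80, 85, 90, 95]
audiovisual = [72, 78, 83, 88, 92, 97]
visual_only = [20, 35, 25, 40, 30, 45]

r_av_a = pearson_r(audiovisual, auditory_only)  # near-linear pair: r close to 1
r_av_v = pearson_r(audiovisual, visual_only)    # noisier pair: weaker r
```

In the study's pattern, the audiovisual/auditory-only pairing yields a strong correlation while the audiovisual/visual-only pairing does not reach significance; the significance test itself (the reported P values) would additionally require the t distribution with n - 2 degrees of freedom.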
Amygdala reactivity in healthy adults is correlated with prefrontal cortical thickness.
Foland-Ross, Lara C; Altshuler, Lori L; Bookheimer, Susan Y; Lieberman, Matthew D; Townsend, Jennifer; Penfold, Conor; Moody, Teena; Ahlf, Kyle; Shen, Jim K; Madsen, Sarah K; Rasser, Paul E; Toga, Arthur W; Thompson, Paul M
2010-12-08
Recent evidence suggests that putting feelings into words activates the prefrontal cortex (PFC) and suppresses the response of the amygdala, potentially helping to alleviate emotional distress. To further elucidate the relationship between brain structure and function in these regions, structural and functional magnetic resonance imaging (MRI) data were collected from a sample of 20 healthy human subjects. Structural MRI data were processed using cortical pattern-matching algorithms to produce spatially normalized maps of cortical thickness. During functional scanning, subjects cognitively assessed an emotional target face by choosing one of two linguistic labels (label emotion condition) or matched geometric forms (control condition). Manually prescribed regions of interest for the left amygdala were used to extract percentage signal change in this region occurring during the contrast of label emotion versus match forms. A correlation analysis between left amygdala activation and cortical thickness was then performed along each point of the cortical surface, resulting in a color-coded r value at each cortical point. Correlation analyses revealed that gray matter thickness in left ventromedial PFC was inversely correlated with task-related activation in the amygdala. These data add support to a general role of the ventromedial PFC in regulating activity of the amygdala.
Spyridakou, Chrysa; Luxon, Linda M; Bamiou, Doris E
2012-07-01
To compare self-reported symptoms of difficulty hearing speech in noise and hyperacusis in adults with auditory processing disorders (APDs) and normal controls, and to compare self-reported symptoms to objective test results (a speech-in-babble test and a transient evoked otoacoustic emission [TEOAE] suppression test using contralateral noise). A prospective case-control pilot study. Twenty-two participants were recruited: 10 patients with reported hearing difficulty, normal audiometry, and a clinical diagnosis of APD, and 12 normal age-matched controls with no reported hearing difficulty. All participants completed the validated Amsterdam Inventory for Auditory Disability questionnaire, a hyperacusis questionnaire, a speech-in-babble test, and a TEOAE suppression test using contralateral noise. Patients had significantly worse scores than controls in all domains of the Amsterdam Inventory questionnaire (with the exception of sound detection) and on the hyperacusis questionnaire (P < .005). Patients also had worse TEOAE suppression results in both ears than controls; however, this result was not significant after Bonferroni correction. Strong correlations were observed between self-reported symptoms of difficulty hearing speech in noise and speech-in-babble test results in the right ear (ρ = 0.624, P = .002), and between self-reported symptoms of hyperacusis and TEOAE suppression results in the right ear (ρ = -0.597, P = .003). There was no significant correlation between the two tests. In summary, a strong correlation was observed between right-ear speech in babble and patient-reported intelligibility of speech in noise, and between right-ear TEOAE suppression by contralateral noise and the hyperacusis questionnaire. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
NASA Technical Reports Server (NTRS)
Schatten, H.; Chakrabarti, A.; Taylor, M.; Sommer, L.; Levine, H.; Anderson, K.; Runco, M.; Kemp, R.
1999-01-01
Calcium loss and muscle atrophy are two of the main metabolic changes experienced by astronauts and crew members during exposure to microgravity in space. Calcium and cytoskeletal events were investigated within sea urchin embryos which were cultured in space under both microgravity and 1 g conditions. Embryos were fixed at time-points ranging from 3 h to 8 days after fertilization. Investigative emphasis was placed upon: (1) sperm-induced calcium-dependent exocytosis and cortical granule secretion; (2) membrane fusion of cortical granule and plasma membranes; (3) microfilament polymerization and microvilli elongation; and (4) embryonic development into morula, blastula, gastrula, and pluteus stages. For embryos cultured under microgravity conditions, the processes of cortical granule discharge, fusion of cortical granule membranes with the plasma membrane, elongation of microvilli and elevation of the fertilization coat were reduced in comparison with embryos cultured at 1 g in space and under normal conditions on Earth. Also, 4% of all cells undergoing division in microgravity showed abnormalities in the centrosome-centriole complex. These abnormalities were not observed within the 1 g flight and ground control specimens, indicating that significant alterations in sea urchin development processes occur under microgravity conditions. Copyright 1999 Academic Press.
Fuller, Christina D.; Galvin, John J.; Maat, Bert; Free, Rolien H.; Başkent, Deniz
2014-01-01
Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as speech intelligibility in noise and perception of music. In normal hearing (NH), musicians have been shown to perform better than non-musicians in auditory processing and perception, especially for challenging listening tasks. This “musician effect” was attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as they would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The tasks increasingly depended on pitch perception: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and a small musician effect only for word identification in one noise condition in the CI simulation. For emotion identification, there was a small musician effect for both unprocessed and CI-simulated signals. For MCI, there was a large musician effect for both. Overall, the effect was stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be more rooted in pitch perception, rather than in a global advantage in cognitive processing (in which case musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly benefit emotion and music perception. PMID:25071428
Li, Gang; Wang, Li; Shi, Feng; Lyall, Amanda E; Lin, Weili; Gilmore, John H; Shen, Dinggang
2014-03-19
Human cortical folding is believed to correlate with cognitive function, which may partly explain why abnormalities of cortical folding have been found in many neurodevelopmental disorders. However, little is known about how cortical gyrification, the cortical folding process, develops in the first 2 years of life, a period of dynamic and regionally heterogeneous cortical growth. In this article, we developed a novel infant-specific method for mapping longitudinal development of local cortical gyrification in infants. Using this method on 219 longitudinal 3T magnetic resonance imaging scans from 73 healthy infants, we systematically and quantitatively characterized, for the first time, longitudinal development of the cortical global gyrification index (GI) and local GI (LGI) in the first 2 years of life. We found that the cortical GI showed marked, age-related development, with a 16.1% increase in the first year and a 6.6% increase in the second year. We also found marked and regionally heterogeneous cortical LGI development in the first 2 years of life, with high-growth regions located in the association cortex and low-growth regions located in the sensorimotor, auditory, and visual cortices. Meanwhile, LGI growth in most cortical regions was positively correlated with brain volume growth, particularly significantly in the prefrontal cortex in the first year. In addition, we observed gender differences in both cortical GIs and LGIs in the first 2 years, with males having larger GIs than females at 2 years of age. This study provides valuable information on normal cortical folding development in infancy and early childhood.
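The gyrification index reported above is commonly defined as the ratio of the folded (pial) cortical surface area to the area of a smooth outer hull enclosing it; a perfectly smooth cortex would have GI = 1, and deeper folding raises the ratio. A minimal sketch of the ratio and its annual growth is below; the surface areas are hypothetical numbers chosen only to mirror the reported growth pattern, not measurements from the study.

```python
def gyrification_index(pial_area, hull_area):
    """GI = folded cortical surface area / smooth outer-hull surface area."""
    return pial_area / hull_area

def percent_increase(old, new):
    """Relative growth, in percent, between two GI measurements."""
    return 100.0 * (new - old) / old

# Hypothetical surface areas (cm^2), for illustration only:
gi_birth = gyrification_index(700.0, 350.0)   # GI = 2.0
gi_year1 = gyrification_index(900.0, 387.5)   # roughly a 16% first-year increase
gi_year2 = gyrification_index(990.0, 400.0)   # roughly a 7% second-year increase
```

The local GI (LGI) follows the same idea restricted to a circular neighborhood around each vertex, which is what allows the regionally heterogeneous growth pattern described above to be mapped across the cortical surface.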
Positive Experiences and Life Aspirations among Adolescents with and without Hearing Impairments.
ERIC Educational Resources Information Center
Magen, Zipora
1990-01-01
Comparison of 79 normally hearing and 42 hearing-impaired adolescents found no differences regarding the intensity of their remembered positive experiences. Hearing-impaired subjects reported more positive interpersonal experiences, rarely experienced positive experiences "with self," and showed less desire for transpersonal commitment,…
38 CFR 17.149 - Sensori-neural aids.
Code of Federal Regulations, 2010 CFR
2010-07-01
... attendance or by reason of being permanently housebound; (6) Those who have a visual or hearing impairment... normally occurring visual or hearing impairments; and (8) Those visually or hearing impaired so severely... frequency ranges which contribute to a loss of communication ability; however, hearing aids are to be...