Daneshi, Ahmad; Mirsalehi, Marjan; Hashemi, Seyed Basir; Ajalloueyan, Mohammad; Rajati, Mohsen; Ghasemi, Mohammad Mahdi; Emamdjomeh, Hesamaldin; Asghari, Alimohamad; Mohammadi, Shabahang; Mohseni, Mohammad; Mohebbi, Saleh; Farhadi, Mohammad
2018-05-01
To evaluate auditory performance and speech production outcomes in children with auditory neuropathy spectrum disorder (ANSD). The effect of age at the time of implantation on surgical outcomes was also evaluated. Cochlear implantation was performed in 136 children with bilateral severe-to-profound hearing loss due to ANSD at four tertiary academic centers. The patients were divided into two groups based on age at the time of implantation: Group I, children ≤24 months, and Group II, children >24 months. The Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores were evaluated after the first and second years of implantation, and the differences between the CAP and SIR scores of the two groups were assessed. The median CAP scores improved significantly after cochlear implantation in all patients (p < 0.001). The improvement in CAP scores during the first year was greater in Group II than in Group I (p = 0.007), but the overall improvement in CAP scores tended to be significantly higher in patients implanted at ≤24 months (p < 0.001). There was no significant difference between the two groups in SIR scores at the first-year and second-year follow-ups; however, the improvement in SIR was significantly greater for Group I at the second-year follow-up (p = 0.003). The auditory performance and speech production skills of children with ANSD improved significantly after cochlear implantation, and this improvement was affected by age at the time of implantation. Copyright © 2018 Elsevier B.V. All rights reserved.
The Development of Auditory Perception in Children Following Auditory Brainstem Implantation
Colletti, Liliana; Shannon, Robert V.; Colletti, Vittorio
2014-01-01
Auditory brainstem implants (ABI) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children for up to 12 years following ABI implantation. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear post-meningitic ossification in 3, NF2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital non-auditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and six with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open-set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without non-auditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI implantation reveal significant auditory benefit in most children, and open-set auditory recognition in many. PMID:25377987
ERIC Educational Resources Information Center
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
2000-01-01
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Systematic review of compound action potentials as predictors for cochlear implant performance.
van Eijl, Ruben H M; Buitenhuis, Patrick J; Stegeman, Inge; Klis, Sjaak F L; Grolman, Wilko
2017-02-01
The variability in speech perception between cochlear implant users is thought to result from degeneration of the auditory nerve. Degeneration of the auditory nerve, assessed histologically, correlates with electrophysiologically acquired measures, such as electrically evoked compound action potentials (eCAPs), in experimental animals. To predict degeneration of the auditory nerve in humans, where histology is impossible, this paper reviews the correlation between speech perception and eCAP recordings in cochlear implant patients. PubMed and Embase were searched. We performed a systematic search for articles containing the following major themes: cochlear implants, evoked potentials, and speech perception. Two investigators independently conducted title-abstract screening, full-text screening, and critical appraisal. Data were extracted from the remaining articles. Twenty-five of 1,429 identified articles described a correlation between speech perception and eCAP attributes. Due to study heterogeneity, a meta-analysis was not feasible, and studies were descriptively analyzed. Several studies investigating presence of the eCAP, recovery time constant, slope of the amplitude growth function, and spatial selectivity showed significant correlations with speech perception. In contrast, neural adaptation, eCAP threshold, and change with varying interphase gap did not significantly correlate with speech perception in any of the identified studies. Significant correlations between speech perception and parameters obtained through eCAP recordings have been documented in the literature; however, reporting was ambiguous. There is insufficient evidence for eCAPs as a predictive factor for speech perception, and more research is needed to further investigate this relation. Laryngoscope, 127:476-487, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
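One eCAP attribute recurring in the reviewed studies, the slope of the amplitude growth function (AGF), is commonly estimated by fitting a line to eCAP amplitude as a function of stimulation level. A minimal sketch of that fit, using hypothetical amplitude and level values (not data from the review):

    import numpy as np

    # Hypothetical eCAP amplitude growth function: stimulation level (current units)
    # versus eCAP amplitude (microvolts). Values are illustrative only.
    levels = np.array([150, 160, 170, 180, 190, 200], dtype=float)
    amplitudes = np.array([55.0, 120.0, 210.0, 310.0, 400.0, 490.0])  # uV

    # Fit a straight line to the growth function; the slope (uV per current unit)
    # is the AGF slope examined as a candidate predictor of speech perception.
    slope, intercept = np.polyfit(levels, amplitudes, 1)
    print(f"AGF slope: {slope:.1f} uV per current unit, intercept: {intercept:.1f} uV")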
Electrocochleographic analysis of the suppression of tinnitus by electrical promontory stimulation.
Watanabe, K; Okawara, D; Baba, S; Yagi, T
1997-01-01
To investigate the origin of tinnitus and to evaluate the mechanism by which it is suppressed, we performed electrical promontory stimulation (EPS) in 56 patients with tinnitus and measured the compound action potential (CAP) using electrocochleography before and after EPS. In the group of patients in whom tinnitus was suppressed, the CAP amplitudes increased significantly, whereas the latencies showed no remarkable change. In the group of patients in whom tinnitus was not suppressed, neither the CAP amplitudes nor the latencies changed significantly. These data indicate that the effect on the cochlear nerve plays an important role in the suppression of tinnitus by EPS. The CAP reflects the number of auditory nerve fibers that discharge synchronously. It is therefore speculated that the increase in CAP amplitude is caused by synchronization of auditory nerve fiber discharges, and that the mechanism by which EPS suppresses tinnitus may be related to this synchronization.
Waardenburg Syndrome: An Unusual Indication of Cochlear Implantation Experienced in 11 Patients.
Bayrak, Feda; Çatlı, Tolgahan; Atsal, Görkem; Tokat, Taşkın; Olgun, Levent
2017-08-01
The aim of this study was to present the surgical findings of children with Waardenburg syndrome (WS) and investigate speech development after cochlear implantation in this unique group of patients. A retrospective chart review of the patients diagnosed with WS and implanted between 1998 and 2015 was performed. The Categories of Auditory Performance (CAP) test was used to assess the auditory skills of these patients; CAP is a nonlinear hierarchical scale used to rate a child's developing auditory abilities. Preoperative test results and intraoperative surgical findings of these patients are presented. In total, 1835 cases were implanted at our institution, and 1210 of these were children. Among these implantees, 11 were diagnosed with WS (0.59% of all implantees). Four of the 11 patients showed an incomplete partition type 2 bony labyrinth abnormality (Mondini deformity), and all patients showed an intraoperative gusher during cochleostomy, which subsided with routine interventions. No other complications occurred during surgery, and all patients showed satisfactory CAP results in the late postoperative period. Our experience with cochlear implantation in patients with WS shows that the procedure is safe and effective in this group of patients. Surgeons should be aware of possible labyrinth malformations and intraoperative problems such as gusher in these patients. In the long term, auditory performance may be satisfactory with optimal postoperative educational and supportive measures.
Cohen-Mimran, Ravit; Sapir, Shimon
2008-01-01
To assess the relationships between central auditory processing (CAP) of sinusoidally modulated speech-like and non-speech acoustic signals and reading skills in shallow (pointed) and deep (unpointed) Hebrew orthographies. Twenty unselected fifth-grade Hebrew speakers performed a rate change detection (RCD) task using the aforementioned acoustic signals. They also performed reading and general ability (IQ) tests. After controlling for general ability, RCD tasks contributed a significant unique variance to the decoding skills. In addition, there was a fairly strong correlation between the score on the RCD with the speech-like stimuli and the unpointed text reading score. CAP abilities may affect reading skills, depending on the nature of orthography (deep vs shallow), at least in the Hebrew language.
Auditory Cortex Basal Activity Modulates Cochlear Responses in Chinchillas
León, Alex; Elgueda, Diego; Silva, María A.; Hamamé, Carlos M.; Delano, Paul H.
2012-01-01
Background The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. Methodology/Principal Findings Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in CM amplitude in both types of experiments, the most common effect being a CM decrease, observed in fifteen animals. Concomitant with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes recovered completely after ninety minutes in the deactivation experiments, only partial recovery was observed in the magnitudes of the cochlear responses. Conclusions/Significance These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the obtained effects suggests that there are at least two functional pathways from the auditory cortex to the cochlea. PMID:22558383
Cochlear implantation in Waardenburg syndrome: The Indian scenario.
Deka, Ramesh Chandra; Sikka, Kapil; Chaturvedy, Gaurav; Singh, Chirom Amit; Venkat Karthikeyan, C; Kumar, Rakesh; Agarwal, Shivani
2010-10-01
Children with Waardenburg syndrome (WS) exhibiting normal inner ear anatomy, like those included in our cohort, derive significant benefit from cochlear implantation and results are comparable to those reported for the general population of implanted children. The patient population of WS accounts for approximately 2% of congenitally deaf children. The purpose of this retrospective case review was to describe the outcomes for those children with WS who have undergone cochlear implantation. On retrospective chart review, there were four cases with WS who underwent cochlear implantation. These cases were assessed for age at implantation, clinical and radiological features, operative and perioperative course, and performance outcomes. Auditory perception and speech production ability were evaluated using categories of auditory performance (CAP), meaningful auditory integration scales (MAIS), and speech intelligibility rating (SIR) during the follow-up period. In this group of children with WS, with a minimum follow-up of 12 months, the CAP score ranged from 3 to 5, MAIS from 25 to 30, and SIR was 3. These scores are comparable with those of other cochlear implantees.
Santarelli, Rosamaria; Starr, Arnold; Michalewski, Henry J; Arslan, Edoardo
2008-05-01
Transtympanic electrocochleography (ECochG) was recorded bilaterally in children and adults with auditory neuropathy (AN) to evaluate receptor and neural generators. Test stimuli were clicks from 60 to 120 dB p.e. SPL. Measures obtained from eight AN subjects were compared with those from 16 normally hearing children. Receptor cochlear microphonics (CMs) in AN were of normal or enhanced amplitude. Neural compound action potentials (CAPs) and receptor summating potentials (SPs) were identified in five AN ears. ECochG potentials in those ears without CAPs were of negative polarity and of normal or prolonged duration. We used adaptation to rapid stimulus rates to distinguish whether the generators of the negative potentials were of neural or receptor origin. Adaptation in controls resulted in an amplitude reduction of the CAP twice that of the SP, without affecting the duration of the ECochG potentials. In seven AN ears without a CAP and with a prolonged negative potential, adaptation was accompanied by a reduction of both the amplitude and the duration of the negative potential to control values, consistent with neural generation. In four ears without a CAP and with normal-duration potentials, adaptation had no effect, consistent with receptor generation. In five AN ears with a CAP, adaptation reduced CAP and SP amplitudes as in controls, but with a significant decrease in response duration. Three patterns of cochlear potentials were identified in AN: (1) presence of receptor SP without CAP, consistent with a pre-synaptic disorder of inner hair cells; (2) presence of both SP and CAP, consistent with a post-synaptic disorder of the proximal auditory nerve; (3) presence of prolonged neural potentials without a CAP, consistent with a post-synaptic disorder of nerve terminals. Cochlear potential measures may identify pre- and post-synaptic disorders of inner hair cells and auditory nerves in AN.
Wu, Chunxiao; Huang, Lexing; Tan, Hui; Wang, Yanting; Zheng, Hongyi; Kong, Lingmei; Zheng, Wenbin
2016-05-15
Our objective was to evaluate age-dependent changes in microstructure and metabolism in the auditory neural pathway of children with profound sensorineural hearing loss (SNHL), and to differentiate between cochlear implantation (CI) patients with good and poor surgical outcomes by using diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS). Ninety-two SNHL children (49 males, 43 females; mean age, 4.9 years) were studied by conventional MR imaging, DTI and MRS. Patients were divided into three groups: Group A consisted of children ≤1 year old (n=20), Group B consisted of children 1-3 years old (n=31), and Group C consisted of children 3-14 years old (n=41). Among the 31 patients (19 males and 12 females, 12 months to 14 years) with CI, 18 patients (mean age 4.8±0.7 years) with a Categories of Auditory Performance (CAP) score over five were classified into the good outcome group and 13 patients (mean age 4.4±0.7 years) with a CAP score below five were classified into the poor outcome group. Two DTI parameters, fractional anisotropy (FA) and apparent diffusion coefficient (ADC), were measured in the superior temporal gyrus (STG) and auditory radiation. Regions of interest for metabolic change measurements were located inside the STG. DTI values were measured based on region-of-interest analysis, and MRS values were used for correlation analysis with CAP scores. Compared with healthy individuals, the 92 SNHL patients displayed decreased FA values in the auditory radiation and STG (p<0.05). Only decreased FA values in the auditory radiation were observed in Group A, whereas decreased FA values in both the auditory radiation and STG were observed in Groups B and C. In Group C, the N-acetyl aspartate/creatinine ratio in the STG was also significantly decreased (p<0.05). Correlation analyses at 12 months post-operation revealed strong correlations between FA in the auditory radiation and CAP scores (r=0.793, p<0.01). DTI and MRS can be used to evaluate microstructural alterations and metabolite concentration changes in the auditory neural pathway that are not detectable by conventional MR imaging. The observed changes in FA suggest that children with SNHL have a developmental delay in myelination of the auditory neural pathway; the greater metabolite concentration changes observed in the auditory cortex of older children further suggest that early cochlear implantation might be more effective in restoring hearing in children with SNHL. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.
Earl, Brian R.; Chertoff, Mark E.
2012-01-01
Future implementation of regenerative treatments for sensorineural hearing loss may be hindered by the lack of diagnostic tools that specify the target(s) within the cochlea and auditory nerve for delivery of therapeutic agents. Recent research has indicated that the amplitude of high-level compound action potentials (CAPs) is a good predictor of overall auditory nerve survival, but does not pinpoint the location of neural damage. A location-specific estimate of nerve pathology may be possible by using a masking paradigm and high-level CAPs to map auditory nerve firing density throughout the cochlea. This initial study in gerbil utilized a high-pass masking paradigm to determine normative ranges for CAP-derived neural firing density functions using broadband chirp stimuli and low-frequency tonebursts, and to determine if cochlear outer hair cell (OHC) pathology alters the distribution of neural firing in the cochlea. Neural firing distributions for moderate-intensity (60 dB pSPL) chirps were affected by OHC pathology whereas those derived with high-level (90 dB pSPL) chirps were not. These results suggest that CAP-derived neural firing distributions for high-level chirps may provide an estimate of auditory nerve survival that is independent of OHC pathology. PMID:22280596
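The high-pass masking approach described above infers place-specific neural contributions by differencing CAP amplitudes obtained under progressively lower high-pass masker cutoffs. A minimal sketch of that differencing step, with hypothetical cutoff frequencies and masked CAP amplitudes (the normalization shown is an illustrative assumption, not the authors' exact analysis; converting frequency bands to cochlear place would additionally require a species-specific frequency-place map):

    import numpy as np

    # Hypothetical masked CAP amplitudes (uV): as the high-pass masker cutoff is
    # lowered, more of the cochlea is masked and the residual CAP shrinks.
    cutoffs_khz = np.array([16.0, 8.0, 4.0, 2.0, 1.0, 0.5])   # descending cutoffs
    masked_cap_uv = np.array([14.0, 10.5, 6.0, 3.0, 1.2, 0.3])

    # Contribution of each band = drop in CAP amplitude when that band becomes
    # masked (difference between successive cutoff conditions).
    band_contrib = masked_cap_uv[:-1] - masked_cap_uv[1:]
    band_labels = [f"{hi:g}-{lo:g} kHz" for hi, lo in zip(cutoffs_khz[:-1], cutoffs_khz[1:])]

    # Normalize to a firing-density-like profile summing to 1.
    density = band_contrib / band_contrib.sum()
    for label, d in zip(band_labels, density):
        print(f"{label}: {d:.2f} of total CAP amplitude")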
Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison
NASA Astrophysics Data System (ADS)
Bleichner, Martin G.; Mirkovic, Bojana; Debener, Stefan
2016-12-01
Objective. This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Approach. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, 5, respectively), as well as in timbre and pitch. The participants had to attend to either the left or the right sound stream. Main results. We found clear attention-modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template-matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single-trial analysis underline the quality of the signal recorded with the cEEGrids. Significance. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well-described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.
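The single-trial template-matching step can be pictured as correlating each held-out trial with class-average ERP templates and choosing the class with the higher correlation. The sketch below is a generic illustration of that idea under assumed array shapes (trials x channels x samples); it is not the authors' exact pipeline:

    import numpy as np

    def template_match_decode(train_epochs, train_labels, test_epochs):
        """Classify attended stream (0 = left, 1 = right) by Pearson correlation
        with class-average ERP templates. Epoch arrays: (trials, channels, samples)."""
        templates = [train_epochs[train_labels == c].mean(axis=0).ravel() for c in (0, 1)]
        preds = []
        for epoch in test_epochs:
            x = epoch.ravel()
            corrs = [np.corrcoef(x, t)[0, 1] for t in templates]
            preds.append(int(np.argmax(corrs)))
        return np.array(preds)

    # Toy example with random data (chance-level performance expected here).
    rng = np.random.default_rng(0)
    train = rng.standard_normal((40, 16, 200))
    labels = np.repeat([0, 1], 20)
    test = rng.standard_normal((10, 16, 200))
    print(template_match_decode(train, labels, test))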
Yang, Ying; Liu, Yue-Hui; Fu, Ming-Fu; Li, Chun-Lin; Wang, Li-Yan; Wang, Qi; Sun, Xi-Bin
2015-08-20
Data on early auditory and speech development during home-based early intervention for infants and toddlers with hearing loss younger than 2 years are still sparse in China. This study aimed to observe the development of auditory and speech skills in deaf infants and toddlers who were fitted with hearing aids and/or received cochlear implantation between the chronological ages of 7-24 months, and to analyze the effect of chronological age and habilitation time on auditory and speech development in the course of home-based early intervention. This longitudinal study included 55 hearing-impaired children with severe and profound binaural deafness, who were divided into Group A (7-12 months), Group B (13-18 months) and Group C (19-24 months) based on chronological age. The Categories of Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) scale were used to evaluate auditory and speech development at baseline and at 3, 6, 9, 12, 18, and 24 months of habilitation. Descriptive statistics were used to describe demographic features, and outcomes were analyzed by repeated-measures analysis of variance. With 24 months of hearing intervention, 78% of the patients were able to understand common phrases and conversation without lip-reading, and 96% of the patients were intelligible to a listener. All three groups showed a rapid growth trend in each period of habilitation. CAP and SIR scores developed rapidly within 24 months of device fitting in Group A, which showed much better auditory and speech abilities than Group B (P < 0.05) and Group C (P < 0.05). Group B achieved better results than Group C, although the difference was not significant (P > 0.05). The data suggest that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and habilitation time may be major factors in aural-verbal outcomes in hearing-impaired children, and the first year of habilitation after device fitting may be particularly crucial for auditory and speech development.
NASA Astrophysics Data System (ADS)
Bohórquez, Jorge; Özdamar, Özcan; Morawski, Krzysztof; Telischi, Fred F.; Delgado, Rafael E.; Yavuz, Erdem
2005-06-01
A system capable of comprehensive and detailed intraoperative monitoring of the cochlea and the auditory nerve was developed. The cochlear blood flow (CBF) and the electrocochleogram (ECochGm) were recorded at the round window (RW) niche using a specially designed otic probe. The ECochGm was further processed to obtain cochlear microphonics (CM) and compound action potentials (CAP). The amplitude and phase of the CM were used to quantify the activity of outer hair cells (OHC); CAP amplitude and latency were used to describe the auditory nerve and the synaptic activity of the inner hair cells (IHC). In addition, concurrent monitoring with a second electrophysiological channel was achieved by recording the compound nerve action potential (CNAP) obtained directly from the auditory nerve. Stimulation paradigms, instrumentation and signal processing methods were developed to extract and differentiate the activity of the OHC and the IHC in response to three different frequencies. Narrowband acoustical stimuli elicited CM signals indicating mainly nonlinear operation of the mechano-electrical transduction of the OHCs. Special envelope detectors were developed and applied to the ECochGm to extract the CM fundamental component and its harmonics in real time. The system was extensively validated in experimental animal surgeries by performing nerve compressions and manipulations.
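Extracting the CM fundamental and its harmonics in real time is often done with quadrature (lock-in style) detection at the stimulus frequency and its multiples. The sketch below shows that idea on a synthetic ECochG-like trace; the sampling rate, stimulus frequency, and signal composition are assumptions for illustration and are not taken from the paper:

    import numpy as np

    fs = 20000.0           # assumed sampling rate (Hz)
    f0 = 500.0             # assumed stimulus (CM fundamental) frequency (Hz)
    t = np.arange(0, 0.1, 1 / fs)

    # Synthetic ECochG-like trace: CM fundamental + second harmonic + noise.
    ecochg = (2.0 * np.sin(2 * np.pi * f0 * t + 0.3)
              + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
              + 0.2 * np.random.default_rng(1).standard_normal(t.size))

    def lockin(x, freq, t):
        """Quadrature detection: amplitude and phase of the component at `freq`."""
        i = np.mean(x * np.cos(2 * np.pi * freq * t))
        q = np.mean(x * np.sin(2 * np.pi * freq * t))
        return 2 * np.hypot(i, q), np.arctan2(i, q)

    for name, freq in [("fundamental", f0), ("2nd harmonic", 2 * f0)]:
        amp, phase = lockin(ecochg, freq, t)
        print(f"CM {name}: amplitude {amp:.2f}, phase {phase:.2f} rad")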
Contralateral Inhibition of Click- and Chirp-Evoked Human Compound Action Potentials
Smith, Spencer B.; Lichtenhan, Jeffery T.; Cone, Barbara K.
2017-01-01
Cochlear outer hair cells (OHC) receive direct efferent feedback from the caudal auditory brainstem via the medial olivocochlear (MOC) bundle. This circuit provides the neural substrate for the MOC reflex, which inhibits cochlear amplifier gain and is believed to play a role in listening in noise and protection from acoustic overexposure. The human MOC reflex has been studied extensively using otoacoustic emissions (OAE) paradigms; however, these measurements are insensitive to subsequent “downstream” efferent effects on the neural ensembles that mediate hearing. In this experiment, click- and chirp-evoked auditory nerve compound action potential (CAP) amplitudes were measured electrocochleographically from the human eardrum without and with MOC reflex activation elicited by contralateral broadband noise. We hypothesized that the chirp would be a more optimal stimulus for measuring neural MOC effects because it synchronizes excitation along the entire length of the basilar membrane and thus evokes a more robust CAP than a click at low to moderate stimulus levels. Chirps produced larger CAPs than clicks at all stimulus intensities (50–80 dB ppeSPL). MOC reflex inhibition of CAPs was larger for chirps than clicks at low stimulus levels when quantified both in terms of amplitude reduction and effective attenuation. Effective attenuation was larger for chirp- and click-evoked CAPs than for click-evoked OAEs measured from the same subjects. Our results suggest that the chirp is an optimal stimulus for evoking CAPs at low stimulus intensities and for assessing MOC reflex effects on the auditory nerve. Further, our work supports previous findings that MOC reflex effects at the level of the auditory nerve are underestimated by measures of OAE inhibition. PMID:28420960
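Effective attenuation, one of the two ways the CAP inhibition above is quantified, is commonly expressed as the dB reduction in probe level that would produce the same amplitude drop as the contralateral noise, read off the baseline input-output function. A minimal sketch with hypothetical growth-function values (not the study's data):

    import numpy as np

    # Hypothetical baseline CAP input-output function (no contralateral noise):
    levels_db = np.array([50, 55, 60, 65, 70, 75, 80], dtype=float)   # dB ppeSPL
    cap_uv    = np.array([0.8, 1.4, 2.3, 3.5, 5.0, 6.6, 8.1])         # CAP amplitude (uV)

    def effective_attenuation(probe_db, amp_with_noise_uv):
        """dB shift: probe level minus the quiet level giving the same amplitude."""
        # np.interp needs increasing x; cap_uv is monotonically increasing here.
        equivalent_level = np.interp(amp_with_noise_uv, cap_uv, levels_db)
        return probe_db - equivalent_level

    # Example: at a 60 dB probe, contralateral noise reduces the CAP to 1.6 uV.
    print(f"Effective attenuation: {effective_attenuation(60.0, 1.6):.1f} dB")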
Milner, Rafał; Lewandowska, Monika; Ganc, Małgorzata; Włodarczyk, Elżbieta; Grudzień, Diana; Skarżyński, Henryk
2018-01-01
In this study, we showed an abnormal resting-state quantitative electroencephalogram (QEEG) pattern in children with central auditory processing disorder (CAPD). Twenty-seven children (16 male, 11 female; mean age = 10.7 years) with CAPD and no symptoms of other developmental disorders, as well as 23 age- and sex-matched, typically developing children (TDC; 11 male, 13 female; mean age = 11.8 years), underwent examination of central auditory processes (CAPs) and a QEEG evaluation consisting of two randomly presented blocks of “Eyes Open” (EO) or “Eyes Closed” (EC) recordings. Significant correlations between individual frequency band powers and CAP test performance were found. The QEEG studies revealed that in CAPD, relative to TDC, there was no effect of decreased delta absolute power (1.5–4 Hz) in the EO compared to the EC condition. Furthermore, children with CAPD showed increased theta power (4–8 Hz) in the frontal area, a tendency toward elevated theta power in the EO block, and reduced low-frequency beta power (12–15 Hz) in the bilateral occipital and the left temporo-occipital regions for both EO and EC conditions. Decreased middle-frequency beta power (15–18 Hz) in children with CAPD was observed only in the EC block. The findings of the present study suggest that QEEG could be an adequate tool to discriminate children with CAPD from normally developing children. Correlation analysis showed a relationship between individual resting EEG frequency bands and CAPs. Increased power of slow waves and decreased power of fast rhythms could indicate abnormal functioning (hypoarousal of the cortex and/or immaturity) of brain areas not specialized in auditory information processing.
Beneficial auditory and cognitive effects of auditory brainstem implantation in children.
Colletti, Liliana
2007-09-01
This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning in children fitted with an auditory brainstem implant (ABI). The improvement in cognitive parameters is due to several factors, among which, as demonstrated in the literature on cochlear implants (CIs), is the activation of the previously absent auditory sensory channel. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities and associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities who cannot benefit from a CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open-set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere with a CI, had auditory neuropathy; one child showed total cochlear ossification bilaterally due to meningitis; and one child had profound hearing loss with cochlear fractures after a head injury. Twelve of these children had multiple associated psychomotor handicaps. The retrosigmoid approach was used in all children. Intraoperative electrical auditory brainstem responses (EABRs) and postoperative EABRs and electrical middle latency responses (EMLRs) were recorded. Perceptual auditory abilities were evaluated with the Evaluation of Auditory Responses to Speech (EARS) battery - the Listening Progress Profile (LIP), the Meaningful Auditory Integration Scale (MAIS), the Meaningful Use of Speech Scale (MUSS) - and the Category of Auditory Performance (CAP). Cognitive evaluation was performed on seven children using the Leiter International Performance Scale - Revised (LIPS-R) test with the following subtests: Figure ground, Form completion, Sequential order and Repeated pattern. No postoperative complications were observed. All children consistently used their devices for >75% of waking hours and had environmental sound awareness and utterance of words and simple sentences. Their CAP scores ranged from 1 to 7 (average = 4); with the MAIS they scored 2-97.5% (average = 38%); MUSS scores ranged from 5 to 100% (average = 49%) and LIP scores from 5 to 100% (average = 45%). Owing to associated disabilities, 12 children were given other therapies (e.g. physical therapy and counselling) in addition to speech and aural rehabilitation therapy.
Scores for two of the four subtests of LIPS-R in this study increased significantly during the first year of auditory brainstem implant use in all seven children selected for cognitive evaluation.
Song, Mee Hyun; Bae, Mi Ran; Kim, Hee Nam; Lee, Won-Sang; Yang, Won Sun; Choi, Jae Young
2010-08-01
Cochlear implantation in patients with narrow internal auditory canal (IAC) can result in variable outcomes; however, preoperative evaluations have limitations in accurately predicting outcomes. In this study, we analyzed the outcomes of cochlear implantation in patients with narrow IAC and correlated the intracochlear electrically evoked auditory brainstem response (EABR) findings to postoperative performance to determine the prognostic significance of intracochlear EABR. Retrospective case series at a tertiary hospital. Thirteen profoundly deaf patients with narrow IAC who received cochlear implantation from 2002 to 2008 were included in this study. Postoperative performance was evaluated after at least 12 months of follow-up, and postoperative intracochlear EABR was measured to determine its correlation with outcome. The clinical significance of electrically evoked compound action potential (ECAP) was also analyzed. Patients with narrow IAC showed postoperative auditory performances ranging from CAP 0 to 4 after cochlear implantation. Intracochlear EABR measured postoperatively demonstrated prognostic value in the prediction of long-term outcomes, whereas ECAP measurements failed to show a significant correlation with outcome. Consistent with the advantages of intracochlear EABR over extracochlear EABR, this study demonstrates that intracochlear EABR has prognostic significance in predicting long-term outcomes in patients with narrow IAC. Intracochlear EABR measured either intraoperatively or in the early postoperative period may play an important role in deciding whether to continue with auditory rehabilitation using a cochlear implant or to switch to an auditory brainstem implant so as not to miss the optimal timing for language development.
Zhang, X Y; Liang, M J; Liu, J H; Li, X H; Zhen, Y Q; Weng, Y L
2017-04-20
Objective: To investigate the effect of white matter abnormality on auditory and speech rehabilitation after cochlear implantation in prelingually deaf children. Method: Thirty-five children with white matter abnormality were included in this study. The degree of leukoaraiosis was evaluated with the Scheltens scale based on MRI. The level of hearing and speech recovery was rated with the Categories of Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) at 6 months, 12 months, and 24 months after operation. Result: The CAP scores and SIR scores of the children with white matter abnormality were lower than those of the control group at 6 months after operation (P < 0.05). The SIR scores of the children with white matter abnormality at 12 months and 24 months after operation were also significantly lower than those of the control group, whereas there was no statistically significant difference between the CAP scores of the two groups at 12 and 24 months after operation (P > 0.05). The Scheltens classification had a greater impact on SIR scores than on CAP scores. Conclusion: The effect of white matter abnormality on auditory and speech rehabilitation after cochlear implantation was related to the degree of leukoencephalopathy: the larger the white matter lesion, the lower the level of hearing and verbal rehabilitation, and speech rehabilitation was more significantly affected by the degree of white matter lesions. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Central auditory processing and migraine: a controlled study.
Agessi, Larissa Mendonça; Villa, Thaís Rodrigues; Dias, Karin Ziliotto; Carvalho, Deusvenir de Souza; Pereira, Liliane Desgualdo
2014-11-08
This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with or without aura by the criteria of "The International Classification of Headache Disorders" (ICHD-3 beta), and a control group of the same age range with no headache history, were included. The Gaps-in-Noise (GIN), Duration Pattern test (DPT) and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control group (15), matched by age and schooling. Subjects with and without aura performed significantly worse on the GIN test for the right ear (p = .006) and the left ear (p = .005) and on the DPT (p < .001) when compared with controls without headache; however, no significant differences were found on the DDT for the right ear (p = .362) or the left ear (p = .190). Subjects with migraine performed worse in auditory gap detection and in the discrimination of short and long durations. They also presented impairment in the physiological mechanisms of temporal processing, especially temporal resolution and temporal ordering, when compared with controls. Migraine could be related to impaired central auditory processing. Research Ethics Committee (CEP 0480.10) - UNIFESP.
Hashemi, Sayed Basir; Rajaeefard, Abdolreza; Norouzpour, Hasan; Tabatabaee, Hamid Reza; Monshizadeh, Leila
2013-03-01
Hearing loss is the most common sensorineural deficiency in human beings. Cochlear implantation has been introduced worldwide to treat severe-to-profound sensorineural hearing loss and can result in improvements in both speech comprehension and production. The present study aims to determine the effect of cochlear implantation on the improvement of auditory performance in 2-7-year-old children. The present follow-up study is a cohort study conducted on 98 children between 2 and 7 years old who had been referred to the Fars Cochlear Implantation Center. The patients' information was gathered from their profiles both before and after the operation. The auditory performance score was obtained at 3 stages (6 months, 1 year, and 2 years after cochlear implantation) using the CAP test. The data were analyzed using the nonparametric Friedman test as well as Mann-Whitney, Kruskal-Wallis, and Spearman's rank correlation tests. The mean and the median of the auditory performance score of the children who had undergone cochlear implantation revealed a significant improvement from 6 months to 1 year and 2 years after implantation. The results showed a significant statistical association of auditory performance with implantation age, type of hearing loss, regular attendance, and the length of participation in the rehabilitation program. No significant association was found between auditory performance and sex, mother's level of education, being monolingual or bilingual, or family size. This study revealed that the type of hearing loss, participation in the rehabilitation program, and the age at cochlear implantation can be major prognostic factors for the response to treatment; therefore, the country's health policy makers and health planners must take into account infant hearing screening during the first 6 months of age.
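The nonparametric comparisons named above (Friedman across the three follow-up stages, Mann-Whitney/Kruskal-Wallis between groups, Spearman for correlations) map directly onto scipy.stats. The snippet below is a generic sketch with made-up CAP scores, not the study's data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Made-up CAP scores (0-7) for 20 children at 6, 12, and 24 months post-implant.
    cap_6m = rng.integers(1, 5, 20)
    cap_12m = np.clip(cap_6m + rng.integers(0, 3, 20), 0, 7)
    cap_24m = np.clip(cap_12m + rng.integers(0, 3, 20), 0, 7)

    # Friedman test: does CAP change across the three repeated measurements?
    print(stats.friedmanchisquare(cap_6m, cap_12m, cap_24m))

    # Mann-Whitney U: compare 24-month CAP between two hypothetical groups.
    group = rng.integers(0, 2, 20).astype(bool)
    print(stats.mannwhitneyu(cap_24m[group], cap_24m[~group], alternative="two-sided"))

    # Spearman correlation: age at implantation (months) versus 24-month CAP.
    age_at_ci = rng.integers(24, 84, 20)
    print(stats.spearmanr(age_at_ci, cap_24m))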
Illg, Angelika; Haack, Marius; Lesinski-Schiedat, Anke; Büchner, Andreas; Lenarz, Thomas
To document the long-term outcomes of auditory performance, educational status, vocational training, and occupational situation in users of cochlear implants (CIs) who were implanted in childhood. This retrospective cross-sectional study of 933 recipients of CIs examined auditory performance, education and vocational training, and occupational outcomes. All participants received their first CI during their childhood between 1986 and 2000. Speech comprehension results were categorized using the categories of auditory performance (CAP) arranged in order of increasing difficulty ranging from 0 to 8. 174 of the 933 pediatric recipients of CIs completed a self-assessment questionnaire regarding their education and occupational outcomes. To measure and compare school education, qualifications were converted into International Standard Classification of Education levels (ISCED-97). Occupations were converted into International Standard Classification of Occupation-88 skill levels. Data from the German General Social Survey (Allgemeine Bevölkerungsumfrage der Sozialwissenschaften/ALLBUS) for 2012 were used as a basis for comparing some of the collected data with the general population in Germany. The results showed that 86.8% of the 174 participants who completed the survey used their devices more than 11 hr per day. Only 2% of the surveyed individuals were nonusers. Median CAP was 4.00 (0 to 8). Age at implantation was significantly correlated with CAP level (r = -0.472; p < 0.001). The mean ISCED level of the 174 surveyed recipients was 2.24 (SD = 0.59; range: 1 to 3). A significant difference (p = 0.001) between users' ISCED levels and those of respondents was found. Participants' ISCED levels and maternal educational levels were significantly correlated (r = 0.271; p = 0.008). The International Standard Classification of Occupation-88 skill levels were as follows: 5% achieved skill level 1; 77% skill level 2; 16% skill level 3; and 5% skill level 4. The average skill level achieved was 2.24 (range 1 to 4; SD = 0.57) which was significantly poorer (t(127) = 4.886; p = 0.001) than the mean skill level of the respondents (mean = 2.54; SD = 0.85). Data collection up to 17.75 (SD = 3.08; range 13 to 28) years post implant demonstrated that the majority of participants who underwent implantation at an early age achieved discrimination of speech sounds without lipreading (CAP category 4.00). Educational, vocational, and occupational level achieved by this cohort were significantly poorer compared with the German and worldwide population average. Children implanted today who are younger at implantation, and with whom more advanced up-to-date CIs are used, are expected to exhibit better auditory performance and have enhanced educational and occupational opportunities. Compared with the circumstances immediately after World War II in the 20th century, children with hearing impairment who use these implants have improved prospects in this regard.
Hwang, Chung-Feng; Ko, Hui-Chen; Tsou, Yung-Ting; Chan, Kai-Chieh; Fang, Hsuan-Yeh; Wu, Che-Ming
2016-01-01
Objectives. We evaluated the causes, hearing, and speech performance before and after cochlear implant reimplantation in Mandarin-speaking users. Methods. In total, 589 patients who underwent cochlear implantation in our medical center between 1999 and 2014 were reviewed retrospectively. Data related to demographics, etiologies, implant-related information, complications, and hearing and speech performance were collected. Results. In total, 22 (3.74%) cases were found to have major complications. Infection (n = 12) and hard failure of the device (n = 8) were the most common major complications. Among them, 13 were reimplanted in our hospital. The mean scores of the Categorical Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) obtained before and after reimplantation were 5.5 versus 5.8 and 3.7 versus 4.3, respectively. The SIR score after reimplantation was significantly better than the score before reimplantation. Conclusions. Cochlear implantation is a safe procedure with low rates of postsurgical revisions and device failures. The Mandarin-speaking patients in this study who received reimplantation had restored auditory performance and speech intelligibility after surgery. Device soft failure was rare in our series, warranting attention to Mandarin-speaking CI users who require revision of their implants due to undesirable symptoms or decreasing performance of uncertain cause. PMID:27413753
Markessis, Emily; Poncelet, Luc; Colin, Cécile; Hoonhorst, Ingrid; Collet, Grégory; Deltenre, Paul; Moore, Brian C J
2010-06-01
Auditory steady-state evoked potential (ASSEP) tuning curves were compared to compound action potential (CAP) tuning curves, both measured at 2 Hz, using sedated beagle puppies. The effect of two types of masker (narrowband noise and sinusoidal) on the tuning curve parameters was assessed. Whatever the masker type, CAP tuning curve parameters were qualitatively and quantitatively similar to the ASSEP ones, with similar inter-subject variability but a greater incidence of upward tip displacement. Whatever the procedure, sinusoidal maskers produced sharper tuning curves than narrowband maskers. Although these differences are unlikely to have significant implications for clinical work, from a fundamental point of view their origin requires further investigation. The same amount of time was needed to record a CAP and an ASSEP 13-point tuning curve. The data further validate the ASSEP technique, which has the advantage of a smaller tendency to produce upward tip shifts than the CAP technique. Moreover, being non-invasive, ASSEP tuning curves can easily be repeated over time in the same subject for clinical and research purposes.
Chang, Young-Soo; Moon, Il Joon; Kim, Eun Yeon; Ahn, Jungmin; Chung, Won-Ho; Cho, Yang-Sun; Hong, Sung Hwa
2015-02-01
Preoperative evaluation of social interaction and global development levels using the Vineland Social Maturity Scale (VSMS) and Bayley Scales of Infant Development-2nd edition (BSID-II) may be beneficial in predicting the postoperative outcome in pediatric cochlear implant recipients. In particular, cautious preoperative counseling regarding the poor postoperative prognosis may be necessary in children with low social skills and developmental status. To determine the clinical benefit of preoperative evaluation of social interaction and global development levels using VSMS and BSID-II in predicting the postoperative outcome in pediatric cochlear implant recipients. A total of 65 deaf children who underwent cochlear implantation (CI) were included in this study. Age at the time of implantation ranged from 12 to 76 months. All of the children underwent a comprehensive preimplant psychological assessment by a clinical psychologist. The VSMS and BSID-II were used for evaluating social skills and a child's development preoperatively. A social quotient (SQ) was calculated by using the VSMS for each subject using the following formula: (social age/chronological age) × 100. The auditory perception and speech production abilities were evaluated using the Categories of Auditory Performance (CAP) scale and the Korean version of the Ling's stage (K-Ling), respectively, at 1 year after CI. The associations between the preoperative SQ/developmental levels and the postoperative auditory/speech outcomes were evaluated. The mean SQ was significantly decreased in the enrolled children (90.6 ± 26.1). The improvement in CAP score at 1 year after CI was correlated with preoperative SQ. The improvements in phonemic and phonologic levels of K-Ling were correlated with preoperative VSMS and BSID-II scores.
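The social quotient used above is a simple ratio scaled to 100, and the reported analysis relates it to post-implant CAP improvement. A minimal sketch of both steps with invented values (a Spearman correlation is one reasonable choice for this kind of ordinal outcome; the study's exact statistical procedure is not specified here):

    import numpy as np
    from scipy.stats import spearmanr

    def social_quotient(social_age_months, chronological_age_months):
        """SQ = (social age / chronological age) x 100, per the VSMS formula above."""
        return social_age_months / chronological_age_months * 100.0

    # Invented example data: preoperative SQ and CAP improvement 1 year after CI.
    social_age = np.array([20, 30, 24, 40, 18, 36], dtype=float)
    chrono_age = np.array([24, 32, 30, 42, 26, 38], dtype=float)
    sq = social_quotient(social_age, chrono_age)
    cap_improvement = np.array([2, 4, 3, 5, 1, 4])

    rho, p = spearmanr(sq, cap_improvement)
    print(f"SQ values: {np.round(sq, 1)}")
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")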
Pianesi, Federica; Scorpecci, Alessandro; Giannantonio, Sara; Micardi, Mariella; Resca, Alessandra; Marsella, Pasquale
2016-03-01
To assess when prelingually deaf children with a cochlear implant (CI) achieve the First Milestone of Oral Language, to study the progression of their prelingual auditory skills in the first year after CI and to investigate a possible correlation between such skills and the timing of initial oral language development. The sample included 44 prelingually deaf children (23 M and 21 F) from the same tertiary care institution, who received unilateral or bilateral cochlear implants. Achievement of the First Milestone of Oral Language (FMOL) was defined as speech comprehension of at least 50 words and speech production of a minimum of 10 words, as established by administration of a validated Italian test for the assessment of initial language competence in infants. Prelingual auditory-perceptual skills were assessed over time by means of a test battery consisting of: the Infant Toddler Meaningful Integration Scale (IT-MAIS); the Infant Listening Progress Profile (ILiP) and the Categories of Auditory Performance (CAP). On average, the 44 children received their CI at 24±9 months and experienced FMOL after 8±4 months of continuous CI use. The IT-MAIS, ILiP and CAP scores increased significantly over time, the greatest improvement occurring between baseline and six months of CI use. On multivariate regression analysis, age at diagnosis and age at CI did not appear to bear correlation with FMOL timing; instead, the only variables contributing to its variance were IT-MAIS and ILiP scores after six months of CI use, accounting for 43% and 55%, respectively. Prelingual auditory skills of implanted children assessed via a test battery six months after CI treatment, can act as indicators of the timing of initial oral language development. Accordingly, the period from CI switch-on to six months can be considered as a window of opportunity for appropriate intervention in children failing to show the expected progression of their auditory skills and who would have higher risk of delayed oral language development. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Cochlear implantation outcomes in children with common cavity deformity; a retrospective study.
Zhang, Li; Qiu, Jianxin; Qin, Feifei; Zhong, Mei; Shah, Gyanendra
2017-09-01
A common cavity deformity (CCD) is a deformed inner ear in which the cochlea and vestibule are confluent, forming a common rudimentary cystic cavity that results in profound hearing loss. Few studies have focused on common cavity deformity. We observed the improvement of auditory and verbal abilities in children who had received cochlear implantation (CI) and compared these outcomes between children with a common cavity and children with normal inner ear structure. A retrospective study was conducted in 12 patients with profound hearing loss, divided on the basis of inner ear structure into a common cavity group and a control group, six in each group, matched in sex, age and time of implantation. Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores and aided hearing thresholds were collected and compared between the two groups. All patients had worn their CI for more than 1 year and were implanted at the Cochlear Center of Anhui Medical University between 2011 and 2015. Postoperative CAP and SIR scores were higher than preoperative scores in both groups (p < 0.05), although the scores were lower in the CCD group than in the control group (p < 0.05). The aided threshold was also lower in the control group than in the CCD group (p < 0.05). Even though audiological improvement in children with CCD was not as good as in those without CCD, CI provides benefits in auditory perception and communication skills in these children.
The auditory nerve overlapped waveform (ANOW): A new objective measure of low-frequency hearing
NASA Astrophysics Data System (ADS)
Lichtenhan, Jeffery T.; Salt, Alec N.; Guinan, John J.
2015-12-01
One of the most pressing problems today in the mechanics of hearing is to understand the mechanical motions in the apical half of the cochlea. Almost all available measurements from the cochlear apex of basilar membrane or other organ-of-Corti transverse motion have been made from ears where the health, or sensitivity, in the apical half of the cochlea was not known. A key step in understanding the mechanics of the cochlear base was to trust mechanical measurements only when objective measures from auditory-nerve compound action potentials (CAPs) showed good preparation sensitivity. However, such traditional objective measures are not adequate monitors of cochlear health in the very low-frequency regions of the apex that are accessible for mechanical measurements. To address this problem, we developed the Auditory Nerve Overlapped Waveform (ANOW) that originates from auditory nerve output in the apex. When responses from the round window to alternating low-frequency tones are averaged, the cochlear microphonic is canceled and phase-locked neural firing interleaves in time (i.e., overlaps). The result is a waveform that oscillates at twice the probe frequency. We have demonstrated that this Auditory Nerve Overlapped Waveform - called ANOW - originates from auditory nerve fibers in the cochlear apex [8], relates well to single-auditory-nerve-fiber thresholds, and can provide an objective estimate of low-frequency sensitivity [7]. Our new experiments demonstrate that ANOW is a highly sensitive indicator of apical cochlear function. During four different manipulations to the scala media along the cochlear spiral, ANOW amplitude changed when either no, or only small, changes occurred in CAP thresholds. Overall, our results demonstrate that ANOW can be used to monitor cochlear sensitivity of low-frequency regions during experiments that make apical basilar membrane motion measurements.
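The core ANOW computation described above is an average of round-window responses to alternating-polarity low-frequency tones: the cochlear microphonic, which follows stimulus polarity, cancels, while polarity-invariant phase-locked neural firing from the apex interleaves and produces energy at twice the probe frequency. A schematic sketch on synthetic signals (the frequencies, sampling rate, and signal model are assumptions for illustration, not the authors' recordings):

    import numpy as np

    fs = 10000.0          # assumed sampling rate (Hz)
    f_probe = 100.0       # assumed low-frequency probe tone (Hz)
    t = np.arange(0, 0.2, 1 / fs)
    rng = np.random.default_rng(3)

    def round_window_response(polarity):
        """Toy model: polarity-following CM plus polarity-invariant neural firing
        that peaks twice per stimulus cycle (once per half-cycle)."""
        cm = polarity * 5.0 * np.sin(2 * np.pi * f_probe * t)
        neural = 1.0 * np.abs(np.sin(2 * np.pi * f_probe * t))   # energy at 2*f_probe
        return cm + neural + 0.3 * rng.standard_normal(t.size)

    # Average responses to condensation (+1) and rarefaction (-1) tones: CM cancels.
    anow = 0.5 * (round_window_response(+1) + round_window_response(-1))

    # Magnitude at twice the probe frequency quantifies the ANOW.
    spectrum = np.abs(np.fft.rfft(anow)) / t.size * 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    bin_2f = np.argmin(np.abs(freqs - 2 * f_probe))
    print(f"ANOW magnitude at {freqs[bin_2f]:.0f} Hz: {spectrum[bin_2f]:.2f}")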
Impact of socioeconomic factors on paediatric cochlear implant outcomes.
Sharma, Shalabh; Bhatia, Khyati; Singh, Satinder; Lahiri, Asish Kumar; Aggarwal, Asha
2017-11-01
The study was aimed at evaluating the impact of certain socioeconomic factors such as family income, level of parents' education, distance between the child's home and the auditory verbal therapy clinic, and age of the child at implantation on postoperative cochlear implant outcomes. Children suffering from congenital bilateral profound sensorineural hearing loss, with a chronologic age of 4 years or younger at the time of implantation, who were able to complete a prescribed 1-year follow-up were included in the study. These children underwent cochlear implantation surgery, and their postoperative outcomes were measured and documented using categories of auditory perception (CAP), meaningful auditory integration (MAIS), and speech intelligibility rating (SIR) scores. Children were divided into three groups based on the level of parental education, family income, and distance of their home from the rehabilitation (auditory verbal therapy) clinic. A total of 180 children were studied. The age at implantation had a significant impact on the postoperative outcomes, with an inverse correlation: the younger the child's age at the time of implantation, the better the postoperative outcomes. However, there were no significant differences in CAP, MAIS, and SIR scores among the three subgroups. Children from families with an annual income of less than $7,500, between $7,500 and $15,000, and more than $15,000 performed equally well, except for significantly higher SIR scores in children with family incomes of more than $15,000. Children of parents who had attended high school or possessed a bachelor's or master's degree had similar scores, with no significant difference. Also, distance from the auditory verbal therapy clinic failed to have any significant impact on a child's performance. These results have been variable, similar to those of previously published studies. A few of the earlier studies concurred with our results, but most had suggested that children in families of higher socioeconomic status have better speech and language acquisition. Cochlear implantation significantly improves auditory perception and speech intelligibility of children suffering from profound sensorineural hearing loss. The younger the age at implantation, the better the results. Hence, early implantation should be promoted and encouraged. Our study suggests that children who followed the designated program of postoperative mapping and auditory verbal therapy for a minimum period of 1 year seemed to do equally well in terms of hearing perception and speech intelligibility, irrespective of the socioeconomic status of the family. Further studies are essential to assess the impact of these factors on long-term speech acquisition and language development. Copyright © 2017 Elsevier B.V. All rights reserved.
Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng
2014-12-01
To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed auditory perception and speech skills that improved over time. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many children with ANSD. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Hahn, Allison H; Campbell, Kimberley A; Congdon, Jenna V; Hoang, John; McMillan, Neil; Scully, Erin N; Yong, Joshua J H; Elie, Julie E; Sturdy, Christopher B
2017-07-01
Chickadees produce a multi-note chick-a-dee call in multiple socially relevant contexts. One component of this call is the D note, which is a low-frequency and acoustically complex note with a harmonic-like structure. In the current study, we tested black-capped chickadees on a between-category operant discrimination task using vocalizations with acoustic structures similar to black-capped chickadee D notes, but produced by various songbird species, in order to examine the role that phylogenetic distance plays in acoustic perception of vocal signals. We assessed the extent to which discrimination performance was influenced by the phylogenetic relatedness among the species producing the vocalizations and by the phylogenetic relatedness between the subjects' species (black-capped chickadees) and the vocalizers' species. We also conducted a bioacoustic analysis and discriminant function analysis in order to examine the acoustic similarities among the discrimination stimuli. A previous study has shown that neural activation in black-capped chickadee auditory and perceptual brain regions is similar following the presentation of these vocalization categories. However, we found that chickadees had difficulty discriminating between forward and reversed black-capped chickadee D notes, a result that directly corresponded to the bioacoustic analysis indicating that these stimulus categories were acoustically similar. In addition, our results suggest that the discrimination between vocalizations produced by two parid species (chestnut-backed chickadees and tufted titmice) is perceptually difficult for black-capped chickadees, a finding that is likely in part because these vocalizations contain acoustic similarities. Overall, our results provide evidence that black-capped chickadees' perceptual abilities are influenced by both phylogenetic relatedness and acoustic structure.
Cochlear Implantation in Patients With CHARGE Syndrome.
Rah, Yoon Chan; Lee, Ji Young; Suh, Myung-Whan; Park, Moo Kyun; Lee, Jun Ho; Chang, Sun O; Oh, Seung-Ha
2016-11-01
To determine the optimal surgical approach for cochlear implantation (CI) preoperatively based on the spatial relation of a displaced facial nerve (FN) and middle ear structures and to analyze clinical outcomes of CI in patients with CHARGE syndrome. Facial nerve displacement and associated deviation of inner ear structures were analyzed in 13 patients (17 ears) with CHARGE syndrome who underwent CI. Surgical accessibility through the facial recess was assessed based on anatomical landmarks. Postoperative speech performance and associated clinical characteristics were analyzed. The most consistently identified ear anomalies were semicircular canal aplasia (100%), ossicular anomaly (100%), and vestibular hypoplasia (88%). Facial nerve displacement was found in 77% of cases (anteroinferior: 47%, anterior: 24%, inferior: 6%). The width of available surgical space around the facial recess was significantly greater in cases using the facial recess approach (2.85 ± 0.9 mm) than in those using an alternative approach (0.12 ± 0.29 mm, P = .02). Postoperatively, 53% achieved better than category 4 on the categories of auditory perception (CAP) scale. The CAP category was significantly correlated with internal auditory canal diameter (P = .025) and did not differ according to the applied surgical approach. Preoperative determination of surgical accessibility through the facial recess would be useful for a safe surgical approach, and successful hearing rehabilitation was achievable by applying appropriate surgical approaches. © The Author(s) 2016.
Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de
2017-12-07
To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, because there was an improvement in fluency only in the individuals without auditory processing disorder.
Experiences from Auditory Brainstem Implantation (ABIs) in four paediatric patients.
Lundin, Karin; Stillesjö, Fredrik; Nyberg, Gunnar; Rask-Andersen, Helge
2016-01-01
Indications for auditory brainstem implants (ABIs) have been widened from patients with neurofibromatosis type 2 (NF2) to paediatric patients with congenital cochlear malformations, cochlear nerve hypoplasia/aplasia, or cochlear ossification after meningitis. We present four ABI surgeries performed in children at Uppsala University Hospital in Sweden since 2009. Three children were implanted with implants from Cochlear Ltd. (Lane Cove, Australia) and one child with an implant from MedEl GMBH (Innsbruck, Austria). A boy with Goldenhar syndrome was implanted with a Cochlear Nucleus ABI24M at age 2 years (patient 1). Another boy with CHARGE syndrome was implanted with a Cochlear Nucleus ABI541 at age 2.5 years (patient 2). Another boy with cochlear ossification after meningitis was implanted with a Cochlear Nucleus ABI24M at age 4 years (patient 3). A girl with cochlear aplasia was implanted with a MedEl Synchrony ABI at age 3 years (patient 4). In patients 1, 2, and 3, the trans-labyrinthine approach was used, and in patient 4 the retro-sigmoid approach was used. Three of the four children benefited from their ABIs and use them full time. Two of the full-time users had a categories of auditory performance (CAP) score of 4 at their last follow-up visit (6 and 2.5 years postoperatively), which means they could consistently discriminate any combination of two of Ling's sounds. One child has not been fully evaluated yet, but is a full-time user and had a CAP score of 2 (responds to speech sounds) after 3 months of ABI use. No severe side effects or unpleasant stimulation effects have been observed so far. There was one case of immediate electrode migration and one case of implant device failure after 6.5 years. ABI should be considered as an option in the rehabilitation of children with similar diagnoses.
Scully, Erin N; Hahn, Allison H; Campbell, Kimberley A; McMillan, Neil; Congdon, Jenna V; Sturdy, Christopher B
2017-07-28
Zebra finches (Taeniopygia guttata) are sexually dimorphic songbirds, not only in appearance but also in vocal production: while males produce both calls and songs, females only produce calls. This dimorphism provides a means to contrast, in a dimorphic species, the auditory perception of vocalizations produced by songbird species of varying degrees of relatedness with that in a monomorphic species, in which both males and females produce calls and songs (e.g., black-capped chickadees, Poecile atricapillus). In the current study, we examined neuronal expression after playback of acoustically similar hetero- and conspecific calls produced by species of differing phylogenetic relatedness to our subject species, the zebra finch. We measured the immediate early gene (IEG) ZENK in two auditory areas of the forebrain (caudomedial mesopallium, CMM, and caudomedial nidopallium, NCM). We found no significant differences in ZENK expression in either male or female zebra finches regardless of playback condition. We also discuss comparisons between our results and the results of a previous study conducted by Avey et al. [1] on black-capped chickadees that used similar stimulus types. These results are consistent with the previous study, which also found no significant differences in expression following playback of calls produced by various heterospecific species and conspecifics [1]. Our results suggest that, similar to black-capped chickadees, IEG expression in zebra finch CMM and NCM is tied to the acoustic similarity of vocalizations and not the phylogenetic relatedness of the species producing the vocalizations. Copyright © 2017 Elsevier B.V. All rights reserved.
Salicylate-induced cochlear impairments, cortical hyperactivity and re-tuning, and tinnitus.
Chen, Guang-Di; Stolzberg, Daniel; Lobarinas, Edward; Sun, Wei; Ding, Dalian; Salvi, Richard
2013-01-01
High doses of sodium salicylate (SS) have long been known to induce temporary hearing loss and tinnitus, effects attributed to cochlear dysfunction. However, our recent publications reviewed here show that SS can induce profound, permanent, and unexpected changes in the cochlea and central nervous system. Prolonged treatment with SS permanently decreased the cochlear compound action potential (CAP) amplitude in vivo. In vitro, high dose SS resulted in a permanent loss of spiral ganglion neurons and nerve fibers, but did not damage hair cells. Acute treatment with high-dose SS produced a frequency-dependent decrease in the amplitude of distortion product otoacoustic emissions and CAP. Losses were greatest at low and high frequencies, but least at the mid-frequencies (10-20 kHz), the mid-frequency band that corresponds to the tinnitus pitch measured behaviorally. In the auditory cortex, medial geniculate body and amygdala, high-dose SS enhanced sound-evoked neural responses at high stimulus levels, but it suppressed activity at low intensities and elevated response threshold. When SS was applied directly to the auditory cortex or amygdala, it only enhanced sound evoked activity, but did not elevate response threshold. Current source density analysis revealed enhanced current flow into the supragranular layer of auditory cortex following systemic SS treatment. Systemic SS treatment also altered tuning in auditory cortex and amygdala; low frequency and high frequency multiunit clusters up-shifted or down-shifted their characteristic frequency into the 10-20 kHz range thereby altering auditory cortex tonotopy and enhancing neural activity at mid-frequencies corresponding to the tinnitus pitch. These results suggest that SS-induced hyperactivity in auditory cortex originates in the central nervous system, that the amygdala potentiates these effects and that the SS-induced tonotopic shifts in auditory cortex, the putative neural correlate of tinnitus, arises from the interaction between the frequency-dependent losses in the cochlea and hyperactivity in the central nervous system. Copyright © 2012 Elsevier B.V. All rights reserved.
Lee, Seung Min; Kim, Jeong Hun; Byeon, Hang Jin; Choi, Yoon Young; Park, Kwang Suk; Lee, Sang-Hoon
2013-06-01
Long-term electroencephalogram (EEG) monitoring broadens EEG applications to various areas, but it requires cap-free recording of EEG signals. Our objective here is to develop a capacitive, small-sized, adhesive and biocompatible electrode for cap-free, long-term EEG monitoring. We have developed an electrode made of polydimethylsiloxane (PDMS) and adhesive PDMS for EEG monitoring. This electrode can be attached to a hairy scalp and be completely hidden by the hair. We tested its electrical and mechanical (adhesive) properties by measuring voltage gain as a function of frequency and by measuring adhesive force over 30 repeated cycles of attachment and detachment. Electrode performance on EEG was evaluated by alpha rhythm detection and by measuring the steady-state visually evoked potential and the N100 auditory evoked potential. We observed the successful recording of alpha rhythm and evoked signals to diverse stimuli with high signal quality. The biocompatibility of the electrode was verified, and a survey found that the electrode was comfortable and convenient to wear. These results indicate that the proposed EEG electrode is suitable and convenient for long-term EEG monitoring.
Hu, Ning; Du, Xiaoping; Li, Wei; West, Matthew B.; Choi, Chul-Hee; Floyd, Robert; Kopke, Richard D.
2017-01-01
Oxidative stress is considered a major cause of the structural and functional changes associated with auditory pathologies induced by exposure to acute acoustic trauma (AAT). In the present study, we examined the otoprotective effects of 2,4-disulfophenyl-N-tert-butylnitrone (HPN-07), a nitrone-based free radical trap, on the physiological and cellular changes in the auditory system of chinchilla following a six-hour exposure to 4 kHz octave band noise at 105 dB SPL. HPN-07 has been shown to suppress oxidative stress in biological models of a variety of disorders. Our results show that administration of HPN-07 beginning four hours after acoustic trauma accelerated and enhanced auditory/cochlear functional recovery, as measured by auditory brainstem responses (ABR), distortion product otoacoustic emissions (DPOAE), compound action potentials (CAP), and cochlear microphonics (CM). The normally tight correlation between the endocochlear potential (EP) and the evoked potentials of CAP and CM was persistently disrupted after noise trauma in untreated animals but returned to homeostatic conditions in HPN-07-treated animals. Histological analyses revealed several therapeutic advantages associated with HPN-07 treatment following AAT, including reductions in inner and outer hair cell loss; reductions in AAT-induced loss of calretinin-positive afferent nerve fibers in the spiral lamina; and reductions in fibrocyte loss within the spiral ligament. These findings support the conclusion that early intervention with HPN-07 following an AAT efficiently blocks the propagative ototoxic effects of oxidative stress, thereby preserving the homeostatic and functional integrity of the cochlea. PMID:28832600
Bergin, M J; Bird, P A; Vlajkovic, S M; Thorne, P R
2015-12-01
Permanent high frequency (>4 kHz) sensorineural hearing loss following middle ear surgery occurs in up to 25% of patients. The aetiology of this loss is poorly understood and may involve transmission of supra-physiological forces down the ossicular chain to the cochlea. Investigating the mechanisms of this injury using animal models is challenging, as evaluating cochlear function with evoked potentials is confounded when ossicular manipulation disrupts the normal air conduction (AC) pathway. Bone conduction (BC) using clinical bone vibrators in small animals is limited by poor transducer output at the high frequencies sensitive to trauma. The objectives of the present study were firstly to evaluate a novel high frequency bone conduction transducer with evoked auditory potentials in a guinea pig model, and secondly to use this model to investigate the impact of middle ear surgical manipulation on cochlear function. We modified a magnetostrictive device as a high frequency BC transducer and evaluated its performance by comparison with a calibrated AC transducer at frequencies up to 32 kHz using the auditory brainstem response (ABR), compound action potential (CAP) and summating potential (SP). To mimic a middle ear traumatising stimulus, a rotating bur was brought into contact with the incudomalleal complex and the effect on evoked cochlear potentials was observed. BC-evoked potentials followed the same input-output function pattern as AC potentials for all ABR frequencies. Deterioration in CAP and SP thresholds was observed after ossicular manipulation. It is possible to use high frequency BC to evoke responses from the injury-sensitive basal region of the cochlea and so not rely on AC with the potential confounder of conductive hearing loss. Ongoing research explores how these findings evolve over time, and ways in which injury may be reduced and the cochlea protected during middle ear surgery. Copyright © 2015 Elsevier B.V. All rights reserved.
Click- and chirp-evoked human compound action potentials
Chertoff, Mark; Lichtenhan, Jeffery; Willis, Marie
2010-01-01
In the experiments reported here, the amplitude and the latency of human compound action potentials (CAPs) evoked from a chirp stimulus are compared to those evoked from a traditional click stimulus. The chirp stimulus was created with a frequency sweep to compensate for basilar membrane traveling wave delay using the O-Chirp equations from Fobel and Dau [(2004). J. Acoust. Soc. Am. 116, 2213–2222] derived from otoacoustic emission data. Human cochlear traveling wave delay estimates were obtained from derived compound band action potentials provided by Eggermont [(1979). J. Acoust. Soc. Am. 65, 463–470]. CAPs were recorded from an electrode placed on the tympanic membrane (TM), and the acoustic signals were monitored with a probe tube microphone attached to the TM electrode. Results showed that the amplitude and latency of chirp-evoked N1 of the CAP differed from click-evoked CAPs in several regards. For the chirp-evoked CAP, the N1 amplitude was significantly larger than the click-evoked N1s. The latency-intensity function was significantly shallower for chirp-evoked CAPs as compared to click-evoked CAPs. This suggests that auditory nerve fibers respond with more unison to a chirp stimulus than to a click stimulus. PMID:21117748
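The construction of a traveling-wave-compensating chirp can be sketched as follows: each frequency component is assigned an onset time equal to the difference between the assumed cochlear delay at the lowest frequency and its own delay, so that the slowest (lowest-frequency) components are launched first and all components arrive at their cochlear places roughly together. The power-law delay model and the constants a and b below are illustrative assumptions only, not the O-Chirp equations of Fobel and Dau or the derived-band delays of Eggermont used in the study.

import numpy as np

fs = 48000                        # sampling rate (Hz)
f_lo, f_hi = 100.0, 10000.0       # frequency range of the chirp (Hz)

# Hypothetical traveling-wave delay model: tau(f) = a * f**(-b), so low
# frequencies are assumed to arrive at their cochlear place later.
a, b = 0.1, 0.5                   # illustrative constants only

def tw_delay(f):
    return a * f ** (-b)          # delay in seconds

# Onset time for each frequency: the lowest frequency starts at t = 0,
# higher frequencies start later by the difference in assumed delay.
f = np.linspace(f_lo, f_hi, 4000)
onset = tw_delay(f_lo) - tw_delay(f)
duration = onset[-1]
t = np.arange(0, duration, 1 / fs)

# Instantaneous frequency over time, then integrate phase to get the chirp.
f_inst = np.interp(t, onset, f)
phase = 2 * np.pi * np.cumsum(f_inst) / fs
chirp = np.sin(phase)
print(f"Chirp sweeps {f_lo:.0f}-{f_hi:.0f} Hz over {duration * 1e3:.1f} ms")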
Cochlear implant rehabilitation outcomes in Waardenburg syndrome children.
de Sousa Andrade, Susana Margarida; Monteiro, Ana Rita Tomé; Martins, Jorge Humberto Ferreira; Alves, Marisa Costa; Santos Silva, Luis Filipe; Quadros, Jorge Manuel Cardoso; Ribeiro, Carlos Alberto Reis
2012-09-01
The purpose of this study was to review the outcomes of children with documented Waardenburg syndrome implanted in the ENT Department of Centro Hospitalar de Coimbra, concerning postoperative speech perception and production, in comparison with non-syndromic implanted children. A retrospective chart review was performed for congenitally deaf children diagnosed as having Waardenburg syndrome who had undergone cochlear implantation with multichannel implants between 1992 and 2011. Postoperative performance outcomes were assessed and compared with the results obtained by children with non-syndromic congenital deafness also implanted in our department. Open-set auditory perception skills were evaluated by using European Portuguese speech discrimination tests (vowels test, monosyllabic word test, number word test and words in sentence test). The Meaningful Auditory Integration Scale (MAIS) and categories of auditory performance (CAP) were also measured. Speech production was further assessed and included results on the Meaningful Use of Speech Scale (MUSS) and the speech intelligibility rating (SIR). To date, 6 implanted children were clinically identified as having WS type I, and one met the diagnosis of type II. All WS children received multichannel cochlear implants, with a mean age at implantation of 30.6 ± 9.7 months (range, 19 to 42 months). Postoperative outcomes in WS children were similar to those of other non-syndromic children. In addition, the WS group showed slightly better performance on the number word and vowel discrimination tests, as well as on the MUSS and MAIS assessments. Our study has shown that cochlear implantation should be considered a rehabilitative option for Waardenburg syndrome children with profound deafness, enabling the development and improvement of speech perception and production abilities in this group of patients and reinforcing their candidacy for this audio-oral rehabilitation method. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Aedo, Cristian; Tapia, Eduardo; Pavez, Elizabeth; Elgueda, Diego; Delano, Paul H; Robles, Luis
2015-01-01
There are two types of sensory cells in the mammalian cochlea: inner hair cells, which make synaptic contact with auditory-nerve afferent fibers, and outer hair cells, which are innervated by crossed and uncrossed medial olivocochlear (MOC) efferent fibers. Contralateral acoustic stimulation activates the uncrossed efferent MOC fibers, reducing cochlear neural responses and thus modifying the input to the central auditory system. The chinchilla, among all studied mammals, displays the lowest percentage of uncrossed MOC fibers, raising questions about the strength and frequency distribution of the contralateral-sound effect in this species. On the other hand, MOC effects on cochlear sensitivity have been mainly studied in anesthetized animals, and since MOC-neuron activity depends on the level of anesthesia, it is important to assess the influence of anesthesia on the strength of efferent effects. Seven adult chinchillas (Chinchilla laniger) were chronically implanted with round-window electrodes in both cochleae. We compared the effect of contralateral sound in the awake and anesthetized conditions. Compound action potentials (CAP) and cochlear microphonics (CM) were measured in the ipsilateral cochlea in response to tones in the absence and presence of contralateral sound. Control measurements performed after sectioning the middle-ear muscles in one animal ruled out any possible middle-ear reflex activation. Contralateral sound produced CAP amplitude reductions in all chinchillas, with suppression effects greater by about 1-3 dB in awake than in anesthetized animals. In contrast, CM amplitude increases of up to 1.9 dB were found in only three awake chinchillas. In both conditions the strongest efferent effects were produced by contralateral tones at frequencies equal to or close to those of the ipsilateral tones. Contralateral CAP suppressions for 1-6 kHz ipsilateral tones corresponded to a span of uncrossed MOC fiber innervation reaching at least the central third of the chinchilla cochlea.
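For reference, the suppression magnitudes quoted above are amplitude ratios expressed in decibels; assuming the conventional 20·log10 scaling for evoked potentials (an assumption, since the scaling convention is not stated in the abstract), a 2 dB CAP suppression corresponds to roughly a 20% amplitude reduction:

\Delta_{\mathrm{dB}} = 20\log_{10}\!\left(\frac{A_{\mathrm{probe\ alone}}}{A_{\mathrm{probe+contra}}}\right), \qquad 10^{2/20} \approx 1.26 \;\Rightarrow\; A_{\mathrm{probe+contra}} \approx 0.79\, A_{\mathrm{probe\ alone}}.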
Tonotopically Ordered Traveling Waves in the Hearing Organs of Bushcrickets in-vivo
NASA Astrophysics Data System (ADS)
Udayashankar, Arun Palghat; Kössl, Manfred; Nowotny, Manuela
2011-11-01
Experimental investigation of auditory mechanics in the mammalian cochlea has been difficult to address in-vivo due to its secure housing inside the temporal bone. Here we studied the easily accessible hearing organ of bushcrickets, located in their forelegs, known as the crista acustica. A characteristic feature of the organ is that it is lined with an array of auditory receptors in a tonotopic fashion, with lower frequencies processed at the proximal part and higher frequencies at the distal part of the foreleg. Each receptor cell is associated with so-called cap cells. The cap cells, graded in size, are directly involved in the mechanics of transduction along with the part of the acoustic trachea that supports the cap cells. Functional similarities between the crista acustica and the vertebrate cochlea such as frequency selectivity and distortion product otoacoustic emissions have been well documented. In this study we used laser Doppler vibrometry to study the mechanics of the organ and observed sound induced traveling waves (TW) along its length. Frequency representation was tonotopic, with TW propagating from the high frequency to the low frequency region of the organ similar to the situation in the cochlea. Traveling wave velocity increased monotonically from 4 to 12 m/s for a frequency range of 6 to 60 kHz, reflecting a smaller topographic spread (organ length: 1 mm) compared to the guinea pig cochlea (organ length: 18 mm). The wavelength of the traveling wave decreased monotonically from 0.67 mm to 0.27 mm for the same frequency range. Vibration velocity of the organ reached noise threshold levels (10 μm/s) at 30 dB SPL for a frequency of 21 kHz. A small non-linear compression (73 dB increase in velocity for an 80 dB increase in SPL) was also observed at 21 kHz. Our results indicate that bushcrickets can be a good model system for exploration of auditory mechanics in-vivo.
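The reported wavelengths follow from the usual relation between propagation velocity and frequency for a traveling wave; for example, pairing the low ends of the reported velocity and frequency ranges (an illustrative pairing, since the abstract does not state which velocity corresponds to which frequency) reproduces the longest reported wavelength:

\lambda = \frac{v}{f}, \qquad \lambda \approx \frac{4\ \mathrm{m/s}}{6000\ \mathrm{Hz}} \approx 0.67\ \mathrm{mm}.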
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Tan, C; Cao, Y; Hu, P
1998-09-01
To investigate the pathophysiological mechanisms of the inner ear in autoimmune sensorineural hearing loss (ASHL). Changes in inner ear hearing function and enzyme activity were observed using auditory electrophysiological techniques and enzyme-histochemical methods. Animals with markedly elevated thresholds of the auditory nerve compound action potential (CAP) and the cochlear microphonic potential (CM) showed reductions of the endolymphatic potential (EP) amplitude (including -EP) to varying degrees, which were related to changes in the activity of Na(+)-K(+)-ATPase and SDH in the stria vascularis and endolymphatic sac. The abnormal enzyme metabolism in inner ear tissues that follows autoimmune inflammatory damage is the pathological foundation of the hearing dysfunction.
Birman, Catherine S; Elliott, Elizabeth J; Gibson, William P R
2012-10-01
To determine the prevalence of additional disabilities in a pediatric cochlear implant population, to identify medical and radiologic conditions associated with additional disabilities, and to identify the effect of additional disabilities on speech perception and language at 12 months postoperatively. Retrospective case review. Tertiary referral center and cochlear implant program. Records were reviewed for children 0 to 16 years old inclusive, who had cochlear implant-related operations over a 12-month period. Diagnostic and rehabilitative. Additional disabilities prevalence; medical history and radiologic abnormalities; and the effect on Categories of Auditory Performance (CAP) score at 12 months postoperatively. Eighty-eight children having 96 operations were identified. The overall prevalence of additional disabilities (including developmental delay, cerebral palsy, visual impairment, autism and attention deficit disorder) was 33%. The main conditions associated with additional disabilities were syndromes and chromosomal abnormalities (87%), jaundice (86%), prematurity (62%), cytomegalovirus (60%), and inner ear abnormalities including cochlear nerve hypoplasia or aplasia (75%) and semicircular canal anomalies (56%). At 12 months postoperatively, almost all (96%) of the children without additional disabilities had a CAP score of 5 or greater (speech), compared with 52% of children with additional disabilities. Children with developmental delay had a median CAP score of 4 at 12 months, compared with 6 for those without developmental delay. Additional disabilities are present in approximately a third of pediatric cochlear implant patients. Additional disabilities significantly affect the outcomes of cochlear implants.
Kaiser, Andreas; Kale, Ajay; Novozhilova, Ekaterina; Siratirakun, Piyaporn; Aquino, Jorge B; Thonabulsombat, Charoensri; Ernfors, Patrik; Olivius, Petri
2014-05-30
Conditioned medium (CM), made by collecting medium after a few days in cell culture and then re-using it to further stimulate other cells, has been a known experimental concept since the 1950s. Our group has explored this technique to stimulate the performance of cells in culture in general, and to evaluate stem- and progenitor cell aptitude for auditory nerve repair enhancement in particular. Compared with other media, all primary endpoints in our published experimental settings have weighed in favor of conditioned culture medium, and we have shown that conditioned culture medium has a stimulatory effect on cell survival. In order to explore the reasons for this improved survival we set out to analyze the conditioned culture medium. We utilized ELISA kits to investigate whether brain stem (BS) slice CM contains any significant amounts of brain-derived neurotrophic factor (BDNF) and glial cell derived neurotrophic factor (GDNF). We further looked for a donor cell with progenitor characteristics that would be receptive to BDNF and GDNF. We chose the well-documented boundary cap (BC) progenitor cells to be tested in our in vitro co-culture setting together with the cochlear nucleus (CN) of the BS. The results show that BS CM contains BDNF and GDNF and that survival of BC cells, as well as BC cell differentiation into neurons, was enhanced when BS CM was used. Altogether, we conclude that BC cells transplanted into a BDNF- and GDNF-rich environment could be suitable for treatment of a traumatized or degenerated auditory nerve. Copyright © 2014 Elsevier B.V. All rights reserved.
Intelligence development of pre-lingual deaf children with unilateral cochlear implantation.
Chen, Mo; Wang, Zhaoyan; Zhang, Zhiwen; Li, Xun; Wu, Weijing; Xie, Dinghua; Xiao, Zi-An
2016-11-01
The present study aims to test whether deaf children with unilateral cochlear implantation (CI) have higher intelligence quotients (IQ). We also sought to identify predictive factors of intelligence development in deaf children with CI. In total, 186 children were enrolled in this study. They were divided into 3 groups: a CI group (N = 66), a hearing loss group (N = 54) and a normal hearing group (N = 66). All children took the Hiskey-Nebraska Test of Learning Aptitude to assess IQ. After that, we used a deafness gene chip, the Categories of Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) to evaluate genotype, auditory performance and speech performance, respectively. At baseline, the average IQs of the hearing loss (HL), CI and normal hearing (NH) groups were 98.3 ± 9.23, 100.03 ± 12.13 and 109.89 ± 10.56, respectively, with the NH group scoring significantly higher than the HL and CI groups (p < 0.05). After 12 months, the average IQs of the HL, CI and NH groups were 99.54 ± 9.38, 111.85 ± 15.38, and 112.08 ± 8.51, respectively. No significant difference between the IQ of the CI and NH groups was found (p > 0.05). The growth of SIR was positively correlated with the growth of IQ (r = 0.247, p = 0.046), while no significant correlations were found between IQ growth and other possible factors, i.e., gender, age at CI, use of hearing aid, genotype, implant device type, inner ear malformation and CAP growth (p > 0.05). Our study suggests that CI potentially improves intelligence development in deaf children. Speech performance growth is significantly correlated with IQ growth in CI children. Deaf children who receive a CI before 6 years of age can achieve satisfactory short-term (12-month) intelligence development that does not differ from that of normal-hearing children. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Auditory and motor imagery modulate learning in music performance
Brown, Rachel M.; Palmer, Caroline
2013-01-01
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences. PMID:23847495
Ballistocardiogram Artifact Removal with a Reference Layer and Standard EEG Cap
Luo, Qingfei; Huang, Xiaoshan; Glover, Gary H.
2014-01-01
Background: In simultaneous EEG-fMRI, the EEG recordings are severely contaminated by ballistocardiogram (BCG) artifacts, which are caused by cardiac pulsations. To reconstruct and remove the BCG artifacts, one promising method is to measure the artifacts in the absence of EEG signal by placing a group of electrodes (BCG electrodes) on a conductive layer (reference layer) insulated from the scalp. However, current BCG reference layer (BRL) methods either use a customized EEG cap composed of electrode pairs, or need to construct the custom reference layer through additional model-building experiments for each EEG-fMRI experiment. These requirements have limited the versatility and efficiency of BRL. The aim of this study is to propose a more practical and efficient BRL method and compare its performance with the most popular BCG removal method, the optimal basis sets (OBS) algorithm. New Method: By designing the reference layer as a permanent and reusable cap, the new BRL method can be used with a standard EEG cap, and no extra experiments or preparations are needed to use the BRL in an EEG-fMRI experiment. Results: The BRL method effectively removed the BCG artifacts from both oscillatory and evoked potential scalp recordings and recovered the EEG signal. Comparison with Existing Method: Compared to OBS, the new BRL method improved the contrast-to-noise ratios of the alpha-wave, visual, and auditory evoked potential signals by 101%, 76%, and 75%, respectively, employing 160 BCG electrodes. Using only 20 BCG electrodes, the BRL improved the EEG signal by 74%, 26%, and 41%, respectively. Conclusion: The proposed method can substantially improve the EEG signal quality compared with traditional methods. PMID:24960423
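The core reference-layer idea, namely estimating the BCG on each scalp channel from simultaneously recorded artifact-only channels and subtracting it, can be sketched with a simple least-squares fit. This is a minimal sketch of the general approach, not the algorithm implemented in the cited study; the channel counts and the remove_bcg_with_reference_layer helper are illustrative.

import numpy as np

def remove_bcg_with_reference_layer(eeg, ref):
    # eeg: scalp recordings, shape (n_scalp_channels, n_samples)
    # ref: reference-layer (BCG-only) recordings, shape (n_ref_channels, n_samples)
    # Fit each scalp channel as a linear combination of reference channels,
    # then subtract the fitted artifact from the scalp data.
    weights, *_ = np.linalg.lstsq(ref.T, eeg.T, rcond=None)   # (n_ref, n_scalp)
    bcg_estimate = (ref.T @ weights).T
    return eeg - bcg_estimate

# Illustrative call with random data standing in for real recordings.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 5000))    # 32 scalp channels, 5000 samples
ref = rng.standard_normal((20, 5000))    # 20 reference-layer BCG electrodes
cleaned = remove_bcg_with_reference_layer(eeg, ref)
print(cleaned.shape)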
Lichtenhan, Jeffery T.; Chertoff, Mark E.
2008-01-01
An analytic compound action potential (CAP) obtained by convolving functional representations of the post-stimulus time histogram summed across auditory nerve neurons [P(t)] and a single neuron action potential [U(t)] was fit to human CAPs. Fitting the analytic CAP to responses recorded before and after noise-induced temporary hearing threshold shift (TTS) provided in vivo estimates of P(t), U(t), and the number of neurons contributing to the CAPs (N). The width of P(t) decreased with increasing signal level and was wider at the lowest signal level following noise exposure. P(t) latency decreased with increasing signal level and was shorter at all signal levels following noise exposure. The damping and oscillatory frequency of U(t) increased with signal level. For subjects with large amounts of TTS, U(t) had greater damping than before noise exposure, particularly at low signal levels. Additionally, U(t) oscillation was lower in frequency at all click intensities following noise exposure. N increased with signal level and was smaller after noise exposure at the lowest signal level. Collectively, these findings indicate that neurons contributing to the CAP during TTS are fewer in number, shorter in latency, and poorer in synchrony than before noise exposure. Moreover, estimates of single neuron action potentials may decay more rapidly and have a lower oscillatory frequency during TTS. PMID:18397026
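The convolution model behind the analytic CAP can be sketched numerically: a unimodal P(t) (summed post-stimulus time histogram), a damped-sinusoid U(t) (single-unit waveform), and a scale factor N combine as CAP(t) = N · [P * U](t). The functional forms and parameter values below are illustrative assumptions, not the forms or fitted values of the cited study.

import numpy as np

fs = 100_000                          # sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)        # 10 ms time axis

# P(t): summed post-stimulus time histogram, sketched as a Gaussian with
# latency mu and spread sigma (illustrative values).
mu, sigma = 0.0015, 0.0003            # seconds
P = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
P /= P.sum()                          # unit area, so N sets the overall scale

# U(t): single-neuron unit waveform, sketched as a damped sinusoid with
# oscillatory frequency f_u and damping constant d (illustrative values).
f_u, d = 900.0, 1500.0                # Hz, 1/s
U = np.exp(-d * t) * np.sin(2 * np.pi * f_u * t)

# Analytic CAP: N contributing neurons firing according to P(t), each
# producing the waveform U(t).
N = 1e4
cap = N * np.convolve(P, U)[: len(t)]
print(f"Trough (N1-like) amplitude: {cap.min():.3g} arbitrary units")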
A partial hearing animal model for chronic electro-acoustic stimulation
NASA Astrophysics Data System (ADS)
Irving, S.; Wise, A. K.; Millard, R. E.; Shepherd, R. K.; Fallon, J. B.
2014-08-01
Objective. Cochlear implants (CIs) have provided some auditory function to hundreds of thousands of people around the world. Although traditionally carried out only in profoundly deaf patients, the eligibility criteria for implantation have recently been relaxed to include many partially-deaf patients with useful levels of hearing. These patients receive both electrical stimulation from their implant and acoustic stimulation via their residual hearing (electro-acoustic stimulation; EAS) and perform very well. It is unclear how EAS improves speech perception over electrical stimulation alone, and little evidence exists about the nature of the interactions between electric and acoustic stimuli. Furthermore, clinical results suggest that some patients who undergo cochlear implantation lose some, if not all, of their residual hearing, reducing the advantages of EAS over electrical stimulation alone. A reliable animal model with clinically relevant partial deafness combined with clinical CIs is important to enable these issues to be studied. This paper outlines such a model that has been successfully used in our laboratory. Approach. This paper outlines a battery of techniques used in our laboratory to generate, validate and examine an animal model of partial deafness and chronic CI use. Main results. Ototoxic deafening produced bilaterally symmetrical hearing thresholds in neonatal and adult animals. Electrical activation of the auditory system was confirmed, and all animals were chronically stimulated via adapted clinical CIs. Acoustic compound action potentials (CAPs) were obtained from partially-hearing cochleae, using the CI amplifier. Immunohistochemical analysis allows the effects of deafness and electrical stimulation on cell survival to be studied. Significance. This animal model has applications in EAS research, including investigating the functional interactions between electric and acoustic stimulation, and the development of techniques to maintain residual hearing following cochlear implantation. The ability to record CAPs via the CI has direct clinical relevance for obtaining objective measures of residual hearing.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Ryu, Nam-Gyu; Moon, Il Joon; Chang, Young Soo; Kim, Byoung Kil; Chung, Won-Ho; Cho, Yang-Sun; Hong, Sung Hwa
2015-12-01
Neuroblastoma (NBL) predominantly affects children under 5 years of age. Through multimodal therapy, including chemotherapy, radiotherapy, surgery, and peripheral blood stem cell transplantation, the survival rate in patients with NBL has improved while treatment-related complications have also increased. Treatment-related ototoxicity, mainly from cisplatin, can result in profound hearing loss requiring cochlear implantation (CI). We analyzed the effectiveness and hearing preservation of CI recipients who had been treated with multimodal therapy for NBL. Patients who received multimodal therapy for NBL and subsequent CIs were enrolled. A detailed review of the perioperative hearing test, speech evaluation, and posttreatment complications was conducted. Speech performance was analyzed using the category of auditory performance (CAP) score and the postoperative hearing preservation of low frequencies was also compared. Patients who were candidates for electro-acoustic stimulation (EAS) used an EAS electrode for low frequency hearing preservation. Three patients were identified and all patients showed improvement of speech performance after CI. The average CAP score improved from 4.3 preoperatively to 5.8 at 1 year postoperatively. Two patients who were fitted with the Flex electrode showed complete hearing preservation and the preserved hearing was maintained over 1 year. The one remaining patient was given the standard CI-512 electrode and showed partial hearing preservation. Patients with profound hearing loss resulting from NBL multimodal therapy can be good candidates for CI, especially for EAS. A soft surgical technique as well as a specifically designed electrode should be applied to this specific population during the CI operation in order to preserve residual hearing and achieve better outcomes.
Human neural tuning estimated from compound action potentials in normal hearing human volunteers
NASA Astrophysics Data System (ADS)
Verschooten, Eric; Desloovere, Christian; Joris, Philip X.
2015-12-01
The sharpness of cochlear frequency tuning in humans is debated. Evoked otoacoustic emissions and psychophysical measurements suggest sharper tuning in humans than in laboratory animals [15], but this is disputed based on comparisons of behavioral and electrophysiological measurements across species [14]. Here we used evoked mass potentials to electrophysiologically quantify tuning (Q10) in humans. We combined a notched noise forward masking paradigm [9] with the recording of trans-tympanic compound action potentials (CAP) from masked probe tones in awake humans and anesthetized monkeys (Macaca mulatta). We compare our results to data obtained with the same paradigm in cat and chinchilla [16], and find that CAP-Q10 values in humans are ~1.6x higher than in cat and chinchilla and ~1.3x higher than in monkey. To estimate the frequency tuning of single auditory nerve fibers (ANFs) in humans, we derive conversion functions from ANFs in cat, chinchilla, and monkey and apply these to the human CAP measurements. The data suggest that sharp cochlear tuning is a feature of Old World primates.
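For context, the Q10 metric quoted above is the probe (characteristic) frequency divided by the bandwidth of the tuning curve measured 10 dB above its tip; with illustrative numbers (not values from the study), a 4 kHz probe whose 10 dB bandwidth is 0.8 kHz gives

Q_{10} = \frac{f_{\mathrm{probe}}}{\mathrm{BW}_{10\,\mathrm{dB}}} = \frac{4\ \mathrm{kHz}}{0.8\ \mathrm{kHz}} = 5,

so a ~1.6x species difference at the same probe frequency corresponds to a ~1.6x narrower 10 dB bandwidth.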
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects per se. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Bevis, Zoe L; Semeraro, Hannah D; van Besouw, Rachel M; Rowan, Daniel; Lineton, Ben; Allsopp, Adrian J
2014-01-01
In order to preserve their operational effectiveness and ultimately their survival, military personnel must be able to detect important acoustic signals and maintain situational awareness. The possession of sufficient hearing ability to perform job-specific auditory tasks is defined as auditory fitness for duty (AFFD). Pure tone audiometry (PTA) is used to assess AFFD in the UK military; however, it is unclear whether PTA is able to accurately predict performance on job-specific auditory tasks. The aim of the current study was to gather information about auditory tasks carried out by infantry personnel on the frontline and the environment these tasks are performed in. The study consisted of 16 focus group interviews with an average of five participants per group. Eighty British army personnel were recruited from five infantry regiments. The focus group guideline included seven open-ended questions designed to elicit information about the auditory tasks performed on operational duty. Content analysis of the data resulted in two main themes: (1) the auditory tasks personnel are expected to perform and (2) situations where personnel felt their hearing ability was reduced. Auditory tasks were divided into subthemes of sound detection, speech communication and sound localization. Reasons for reduced performance included background noise, hearing protection and attention difficulties. The current study provided an important and novel insight to the complex auditory environment experienced by British infantry personnel and identified 17 auditory tasks carried out by personnel on operational duties. These auditory tasks will be used to inform the development of a functional AFFD test for infantry personnel.
Removal of BCG artifacts using a non-Kirchhoffian overcomplete representation.
Dyrholm, Mads; Goldman, Robin; Sajda, Paul; Brown, Truman R
2009-02-01
We present a nonlinear unmixing approach for extracting the ballistocardiogram (BCG) from EEG recorded in an MR scanner during simultaneous acquisition of functional MRI (fMRI). First, an overcomplete basis is identified in the EEG based on a custom multipath EEG electrode cap. Next, the overcomplete basis is used to infer non-Kirchhoffian latent variables that are not consistent with a conservative electric field. Neural activity is strictly Kirchhoffian while the BCG artifact is not, and the representation can hence be used to remove the artifacts from the data in a way that does not attenuate the neural signals needed for optimal single-trial classification performance. We compare our method to more standard methods for BCG removal, namely independent component analysis and optimal basis sets, by looking at single-trial classification performance for an auditory oddball experiment. We show that our overcomplete representation method for removing BCG artifacts results in better single-trial classification performance compared to the conventional approaches, indicating that the derived neural activity in this representation retains the complex information in the trial-to-trial variability.
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.
Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing
2016-01-01
Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried
2018-05-13
Recent research provides evidence for a functional role of brain oscillations in perception. For example, auditory temporal resolution seems to be linked to the individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can be modulated via auditory transcranial alternating current stimulation. Modulation of individual gamma frequency is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. Therefore, we conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated with two individualized transcranial alternating current stimulation frequencies: 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below their individual gamma frequency (control condition) while they were performing a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline, and in the experimental condition transcranial alternating current stimulation modulated gap detection performance. In the control condition, stimulation did not modulate gap detection performance. In addition, in the elderly, the effect of transcranial alternating current stimulation on auditory temporal resolution seems to depend on endogenous frequencies in auditory cortex: elderly participants with slower individual gamma frequencies and lower auditory temporal resolution profit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance. This article is protected by copyright. All rights reserved.
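The baseline relationship reported above is a simple correlation between individual gamma frequency and gap detection performance across participants. A minimal sketch follows; the participant values and variable names are invented assumptions, not the study's data.

```python
# Correlating individual gamma frequency (IGF) with gap detection
# performance across participants (illustrative values only).
import numpy as np
from scipy.stats import pearsonr

igf_hz = np.array([38.0, 41.5, 44.0, 46.5, 48.0, 50.5, 52.0, 54.5,
                   56.0, 58.5, 60.0, 62.5])            # one value per subject
gap_threshold_ms = np.array([9.1, 8.7, 8.2, 7.9, 7.5, 7.2, 6.8, 6.5,
                             6.3, 6.0, 5.8, 5.5])      # lower = better resolution

r, p = pearsonr(igf_hz, gap_threshold_ms)
print(f"r = {r:.2f}, p = {p:.4f}")  # negative r: faster IGF, smaller detectable gap
```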
Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K
2017-01-01
This fourteen-centre project used professional rating scales and parent questionnaires to assess longitudinal outcomes in a large non-selected population of children receiving simultaneous and sequential bilateral cochlear implants. This was an observational non-randomized service evaluation. Data were collected at four time points: before bilateral cochlear implants or before the sequential implant, one year, two years, and three years after. The measures reported are Categories of Auditory Performance II (CAPII), Speech Intelligibility Rating (SIR), Bilateral Listening Skills Profile (BLSP) and Parent Outcome Profile (POP). One thousand and one children aged from 8 months to almost 18 years were involved, although there were many missing data. In children receiving simultaneous implants after one, two, and three years respectively, median CAP scores were 4, 5, and 6; median SIR scores were 1, 2, and 3. Three years after receiving simultaneous bilateral cochlear implants, 61% of children were reported to understand conversation without lip-reading and 66% had intelligible speech if the listener concentrated hard. Auditory performance and speech intelligibility were significantly better in female children than males. Parents of children using sequential implants were generally positive about their child's well-being and behaviour since receiving the second device; those who were less positive about well-being changes also generally reported their children less willing to wear the second device. Data from 78% of paediatric cochlear implant centres in the United Kingdom provide a real-world picture of outcomes of children with bilateral implants in the UK. This large reference data set can be used to identify children in the lower quartile for targeted intervention.
Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A
2016-11-01
The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support research that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort especially on passages that are less interesting or more difficult could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.
[Multidimensionality of inner speech and its relationship with abnormal perceptions].
Tamayo-Agudelo, William; Vélez-Urrego, Juan David; Gaviria-Castaño, Gilberto; Perona-Garcelán, Salvador
Inner speech is a common human experience. Recently, there have been studies linking this experience with cognitive functions, such as problem solving, reading, writing, autobiographical memory, and some disorders, such as anxiety and depression. In addition, inner speech is recognised as the main source of auditory hallucinations. The main purpose of this study is to establish the factor structure of Varieties of Inner Speech Questionnaire (VISQ) in a sample of the Colombian population. Furthermore, it aims at establishing a link between VISQ and abnormal perceptions. This was a cross-sectional study in which 232 college students were assessed using the VISQ and the Cardiff Anomalous Perceptions Scale (CAPS). Through an exploratory factor analysis, a structure of three factors was found: Other Voices in the Internal Speech, Condensed Inner speech, and Dialogical/Evaluative Inner speech, all of them with acceptable levels of reliability. Gender differences were found in the second and third factor, with higher averages for women. Positive correlations were found among the three VISQ and the two CAPS factors: Multimodal Perceptual Alterations and Experiences Associated with the Temporal Lobe. The results are consistent with previous findings linking the factors of inner speech with the propensity to auditory hallucination, a phenomenon widely associated with temporal lobe abnormalities. The hallucinations associated with other perceptual systems, however, are still weakly explained. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
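For readers unfamiliar with the procedure, the following is a minimal exploratory-factor-analysis sketch on simulated questionnaire items using scikit-learn. It does not reproduce the VISQ analysis (which used a dedicated EFA, typically with rotation); the respondent count, item count, and loadings are illustrative assumptions.

```python
# Illustrative three-factor analysis of simulated Likert-style items
# (not the VISQ data; sklearn's FactorAnalysis is used here without rotation).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items, n_factors = 232, 18, 3

# Simulate items that load on three latent factors plus noise.
latent = rng.standard_normal((n_respondents, n_factors))
loadings = rng.uniform(0.4, 0.9, size=(n_factors, n_items))
items = latent @ loadings + 0.5 * rng.standard_normal((n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=1)
fa.fit(items)
print("estimated loadings shape:", fa.components_.shape)  # (3, 18)
```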
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
2011-01-01
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. Material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of N2, P2 and P300 waves and a psychoacoustic test of central auditory function: the frequency pattern test (FPT). Next, children took part in the regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; again, psychoacoustic tests were performed and P300 cortical potentials were recorded. After that, statistical analyses were performed. Analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie
2014-01-01
Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Avey, Marc T; Hoeschele, Marisa; Moscicki, Michele K; Bloomfield, Laurie L; Sturdy, Christopher B
2011-01-01
Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas, that threat can be conveyed by the signals of different species, and that these signals must be learned.
Auditory Processing of Older Adults with Probable Mild Cognitive Impairment
ERIC Educational Resources Information Center
Edwards, Jerri D.; Lister, Jennifer J.; Elias, Maya N.; Tetlow, Amber M.; Sardina, Angela L.; Sadeq, Nasreen A.; Brandino, Amanda D.; Bush, Aryn L. Harrison
2017-01-01
Purpose: Studies suggest that deficits in auditory processing predict cognitive decline and dementia, but those studies included limited measures of auditory processing. The purpose of this study was to compare older adults with and without probable mild cognitive impairment (MCI) across two domains of auditory processing (auditory performance in…
ERIC Educational Resources Information Center
Fassler, Joan
The study investigated the task performance of cerebral palsied children under conditions of reduced auditory input and under normal auditory conditions. A non-cerebral palsied group was studied in a similar manner. Results indicated that cerebral palsied children showed some positive change in performance, under conditions of reduced auditory…
A Cost and Performance System (CAPS) in a Federal agency
NASA Technical Reports Server (NTRS)
Huseonia, W. F.; Penton, P. G.
1994-01-01
Cost and Performance System (CAPS) is an automated system used from the planning phase through implementation to analysis and documentation. Data is retrievable or available for analysis of cost versus performance anomalies. CAPS provides a uniform system across intra- and international elements. A common system is recommended throughout an entire cost or profit center. Data can be easily accumulated and aggregated into higher levels of tracking and reporting of cost and performance. The level and quality of performance or productivity are indicated in the CAPS model and its process. The CAPS model provides the necessary decision information and insight to the principal investigator/project engineer for a successful project management experience. CAPS provides all levels of management with the appropriate detailed level of data.
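As a loose illustration of the roll-up described above, and not the actual CAPS implementation, the sketch below aggregates hypothetical per-task cost and performance figures up to the project level with pandas; every column name and number is an assumption.

```python
# Hypothetical cost-versus-performance roll-up, loosely in the spirit of the
# aggregation described for CAPS (column names and figures are made up).
import pandas as pd

records = pd.DataFrame({
    "center":            ["A", "A", "A", "B", "B"],
    "project":           ["P1", "P1", "P2", "P3", "P3"],
    "task":              ["design", "test", "design", "build", "test"],
    "planned_cost":      [100.0, 60.0, 80.0, 120.0, 50.0],
    "actual_cost":       [110.0, 55.0, 95.0, 118.0, 62.0],
    "work_complete_pct": [100, 90, 80, 100, 70],
})

# Aggregate upward: task -> project (and center), then flag cost variance.
by_project = records.groupby(["center", "project"]).agg(
    planned=("planned_cost", "sum"),
    actual=("actual_cost", "sum"),
    mean_complete=("work_complete_pct", "mean"),
)
by_project["cost_variance"] = by_project["actual"] - by_project["planned"]
print(by_project)
```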
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
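The regression finding above (tone discrimination predicting emotion recognition) amounts to a simple linear fit across participants. A minimal sketch with invented percent-correct scores follows.

```python
# Simple linear regression: tone discrimination predicting auditory emotion
# recognition (values are invented for illustration).
import numpy as np
from scipy.stats import linregress

tone_discrimination = np.array([55, 60, 62, 68, 70, 74, 78, 80, 85, 90])  # % correct
emotion_recognition = np.array([48, 52, 55, 61, 60, 66, 70, 72, 75, 82])  # % correct

fit = linregress(tone_discrimination, emotion_recognition)
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.4f}")
```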
Estimating subglottal pressure via airflow interruption with auditory masking.
Hoffman, Matthew R; Jiang, Jack J
2009-11-01
Current noninvasive measurement of subglottal pressure using airflow interruption often produces inconsistent results due to the elicitation of audio-laryngeal reflexes. Auditory feedback could be considered as a means of ensuring measurement accuracy and precision. The purpose of this study was to determine if auditory masking could be used with the airflow interruption system to improve intrasubject consistency. A prerecorded sample of subject phonation was played on a loop over headphones during the trials with auditory masking. This provided subjects with a target pitch and blocked out distracting ambient noise created by the airflow interrupter. Subglottal pressure was noninvasively measured using the airflow interruption system. Thirty subjects, divided into two equal groups, performed 10 trials without auditory masking and 10 trials with auditory masking. Group one performed the normal trials first, followed by the trials with auditory masking. Group two performed the auditory masking trials first, followed by the normal trials. Intrasubject consistency was improved by adding auditory masking, resulting in a decrease in average intrasubject standard deviation from 0.93 ± 0.51 to 0.47 ± 0.22 cm H2O (P < 0.001). Auditory masking can be used effectively to combat audio-laryngeal reflexes and aid subjects in maintaining constant glottal configuration and frequency, thereby increasing intrasubject consistency when measuring subglottal pressure. By considering auditory feedback, a more reliable method of measurement was developed. This method could be used by clinicians, as reliable, immediately available values of subglottal pressure are useful in evaluating laryngeal health and monitoring treatment progress.
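The consistency measure quoted above, the average intrasubject standard deviation, is straightforward to compute. The sketch below uses invented trial values for a few subjects in the two masking conditions.

```python
# Average intrasubject standard deviation of subglottal pressure estimates
# (cm H2O), with and without auditory masking; trial values are invented.
import numpy as np

# rows = subjects, columns = repeated trials
without_masking = np.array([[7.1, 8.9, 6.5, 8.2, 7.8],
                            [9.0, 7.4, 8.8, 10.1, 8.3],
                            [6.2, 7.5, 6.9, 8.0, 7.1]])
with_masking = np.array([[7.6, 7.9, 7.4, 7.8, 7.7],
                         [8.6, 8.9, 8.4, 8.8, 8.5],
                         [6.9, 7.2, 7.0, 7.3, 7.1]])

print("mean intrasubject SD, no masking:", without_masking.std(axis=1, ddof=1).mean())
print("mean intrasubject SD, masking   :", with_masking.std(axis=1, ddof=1).mean())
```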
Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?
Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter
2006-01-01
To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally- and temporally-dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.
Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?
McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh
2014-05-01
Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.
Bellis, Teri James; Ross, Jody
2011-09-01
It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
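The modality × response-condition analysis described above is a two-way repeated-measures ANOVA. A minimal sketch on a small invented long-format dataset follows, using statsmodels; the subject count, factor levels, and scores are assumptions.

```python
# Two-way repeated-measures ANOVA (modality x response condition) on
# invented scores, loosely mirroring the design described above.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

scores = {
    ("auditory", "humming"):  [92, 88, 95, 90, 93, 91],
    ("auditory", "labeling"): [84, 80, 86, 82, 85, 83],
    ("visual",   "humming"):  [78, 75, 80, 77, 79, 76],
    ("visual",   "labeling"): [85, 82, 88, 84, 86, 83],
}
rows = []
for (modality, response), vals in scores.items():
    for subj, score in enumerate(vals):
        rows.append({"subject": subj, "modality": modality,
                     "response": response, "score": score})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="score", subject="subject",
              within=["modality", "response"]).fit()
print(res.anova_table)
```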
The experience of agency in sequence production with altered auditory feedback.
Couchman, Justin J; Beasley, Robertson; Pfordresher, Peter Q
2012-03-01
When speaking or producing music, people rely in part on auditory feedback - the sounds associated with the performed action. Three experiments investigated the degree to which alterations of auditory feedback (AAF) during music performances influence the experience of agency (i.e., the sense that your actions led to auditory events) and the possible link between agency and the disruptive effect of AAF on production. Participants performed short novel melodies from memory on a keyboard. Auditory feedback during performances was manipulated with respect to its pitch contents and/or its synchrony with actions. Participants rated their experience of agency after each trial. In all experiments, AAF reduced judgments of agency across conditions. Performance was most disrupted (measured by error rates and slowing) when AAF led to an ambiguous experience of agency, suggesting that there may be some causal relationship between agency and disruption. However, analyses revealed that these two effects were probably independent. A control experiment verified that performers can make veridical judgments of agency. Published by Elsevier Inc.
Meyerhoff, Hauke S; Huff, Markus
2016-04-01
Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
2016-01-01
Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition, which was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884
Auditory memory function in expert chess players.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Chess is a game that involves many aspects of high level cognition such as memory, attention, focus and problem solving. Long term practice of chess can improve cognition performances and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long term chess playing like other behavioral skills because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched on a range of conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. The mean score of the dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p≤ 0.001). The difference between the ears scores for expert chess players (p= 0.023) and non-chess players (p= 0.013) was significant. Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performances due to playing chess for a long time.
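The group comparison above reduces to an independent-samples t-test on memory scores. A minimal sketch with invented scores follows.

```python
# Independent-samples t-test comparing dichotic auditory-verbal memory scores
# of chess players and non-players (scores are invented for illustration).
import numpy as np
from scipy.stats import ttest_ind

chess_players = np.array([78, 82, 75, 88, 80, 85, 79, 83, 86, 81])
non_players   = np.array([70, 68, 74, 72, 69, 75, 71, 73, 67, 70])

t, p = ttest_ind(chess_players, non_players)
print(f"t = {t:.2f}, p = {p:.4f}")
```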
Temporal auditory aspects in children with poor school performance and associated factors.
Rezende, Bárbara Antunes; Lemos, Stela Maris Aguiar; Medeiros, Adriane Mesquita de
2016-01-01
To investigate the auditory temporal aspects in children with poor school performance aged 7-12 years and their association with behavioral aspects, health perception, school and health profiles, and sociodemographic factors. This is an observational, analytical, cross-sectional study including 89 children with poor school performance aged 7-12 years enrolled in the municipal public schools of a municipality in Minas Gerais state, participants of Specialized Educational Assistance. The first stage of the study was conducted with the subjects' parents aiming to collect information on sociodemographic aspects, health profile, and educational records. In addition, the parents responded to the Strengths and Difficulties Questionnaire (SDQ). The second stage was conducted with the children in order to investigate their health self-perception and analyze the auditory assessment, which consisted of meatoscopy, Transient Otoacoustic Emissions, and tests that evaluated the aspects of simple auditory temporal ordering and auditory temporal resolution. Tests assessing the temporal aspects of auditory processing were considered as response variables, and the explanatory variables were grouped for univariate and multivariate logistic regression analyses. The level of significance was set at 5%. A statistically significant association was found between the auditory temporal aspects and the variables age, gender, grade repetition, and health self-perception. Children with poor school performance presented changes in the auditory temporal aspects. The temporal abilities assessed suggest an association with different factors such as maturational processes, health self-perception, and school records.
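As an illustration of the univariate logistic models mentioned above, and not the study's actual model, the sketch below regresses a simulated pass/fail outcome on a temporal-processing test against age and grade repetition with statsmodels; all variable names and values are invented.

```python
# Illustrative logistic regression: failing a temporal-resolution test as a
# function of age and grade repetition (all values invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 89
age = rng.integers(7, 13, size=n)
repeated_grade = rng.integers(0, 2, size=n)

# Simulate outcomes: older children and non-repeaters fail less often here.
true_logit = 1.5 - 0.3 * age + 1.0 * repeated_grade
fail = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

df = pd.DataFrame({"fail": fail, "age": age, "repeated_grade": repeated_grade})
model = smf.logit("fail ~ age + repeated_grade", data=df).fit(disp=False)
print(model.params)
```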
Pillai, Roshni; Yathiraj, Asha
2017-09-01
The study evaluated whether four different memory skills (memory score, sequencing score, memory span, & sequencing span) differ or are related when assessed through the auditory modality, the visual modality, and the combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores, the memory span, or the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
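The agreement analysis mentioned above relies on Bland-Altman bias and limits of agreement. A minimal sketch with invented paired auditory and visual memory scores follows.

```python
# Bland-Altman agreement between auditory and visual memory scores
# (paired per child; values are invented for illustration).
import numpy as np

auditory = np.array([12, 14, 11, 15, 13, 16, 12, 14, 15, 13], dtype=float)
visual   = np.array([10, 13, 10, 14, 11, 15, 11, 12, 14, 12], dtype=float)

diff = auditory - visual
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f}, 95% limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")
```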
Adel, Youssef; Hilkhuysen, Gaston; Noreña, Arnaud; Cazals, Yves; Roman, Stéphane; Macherey, Olivier
2017-06-01
Electrical stimulation of auditory nerve fibers using cochlear implants (CI) shows psychophysical forward masking (pFM) up to several hundreds of milliseconds. By contrast, recovery of electrically evoked compound action potentials (eCAPs) from forward masking (eFM) was shown to be more rapid, with time constants no greater than a few milliseconds. These discrepancies suggested two main contributors to pFM: a rapid-recovery process due to refractory properties of the auditory nerve and a slow-recovery process arising from more central structures. In the present study, we investigate whether the use of different maskers between eCAP and psychophysical measures, specifically single-pulse versus pulse train maskers, may have been a source of confound. In experiment 1, we measured eFM using the following: a single-pulse masker, a 300-ms low-rate pulse train masker (LTM, 250 pps), and a 300-ms high-rate pulse train masker (HTM, 5000 pps). The maskers were presented either at the same physical current (Φ) or at the same perceptual (Ψ) level corresponding to comfortable loudness. Responses to a single-pulse probe were measured for masker-probe intervals ranging from 1 to 512 ms. Recovery from masking was much slower for pulse trains than for the single-pulse masker. When presented at Φ level, HTM produced more and longer-lasting masking than LTM. However, results were inconsistent when LTM and HTM were compared at Ψ level. In experiment 2, masked detection thresholds of single-pulse probes were measured using the same pulse train masker conditions. In line with our eFM findings, masked thresholds for HTM were higher than those for LTM at Φ level. However, the opposite result was found when the pulse trains were presented at Ψ level. Our results confirm the presence of slow-recovery phenomena at the level of the auditory nerve in CI users, as previously shown in animal studies. Inconsistencies between eFM and pFM results, despite using the same masking conditions, further underline the importance of comparing electrophysiological and psychophysical measures with identical stimulation paradigms.
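Recovery-from-masking curves of the kind measured above are commonly summarized by fitting an exponential recovery function and reporting its time constant. The sketch below fits such a curve to invented masker-probe interval data; the functional form and values are assumptions, not the study's fitting procedure.

```python
# Fit an exponential recovery curve to invented forward-masking data:
# normalized response amplitude as a function of masker-probe interval.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, amplitude, tau, baseline):
    """Exponential recovery toward baseline with time constant tau (ms)."""
    return baseline - amplitude * np.exp(-t / tau)

intervals_ms = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=float)
response = np.array([0.12, 0.25, 0.45, 0.66, 0.82, 0.91, 0.96, 0.99, 1.00, 1.00])

params, _ = curve_fit(recovery, intervals_ms, response, p0=[1.0, 5.0, 1.0])
amplitude, tau, baseline = params
print(f"estimated time constant: {tau:.1f} ms")
```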
Procedures for central auditory processing screening in schoolchildren.
Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella
2018-03-22
Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests and their respective terms in Portuguese. Inclusion criteria were: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria were: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluation of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills, and are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative in the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed that allow the selection of as many hearing skills as possible, validated by comparison with the battery of tests used in the diagnosis. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Sport stacking in auditory and visual attention of grade 3 learners.
Mortimer, J; Krysztofiak, J; Custard, S; McKune, A J
2011-08-01
The effect of sport stacking on auditory and visual attention in 32 Grade 3 children was examined using a randomised, cross-over design. Children were randomly assigned to a sport stacking (n=16) or arts/crafts group (n=16) with these activities performed over 3 wk. (12 30-min. sessions, 4 per week). This was followed by a 3-wk. wash-out period after which there was a cross-over and the 3-wk. intervention repeated, with the sports stacking group performing arts/crafts and the arts/crafts group performing sports stacking. Performance on the Integrated Visual and Auditory Continuous Performance Test, a measure of auditory and visual attention, was assessed before and after each of the 3-wk. interventions for each group. Comparisons indicated that sport stacking resulted in significant improvement in high demand function and fine motor regulation, while it caused a significant reduction in low demand function. Auditory and visual attention adaptations to sport stacking may be specific to the high demand nature of the task.
Should visual speech cues (speechreading) be considered when fitting hearing aids?
NASA Astrophysics Data System (ADS)
Grant, Ken
2002-05-01
When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into an functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
Auditory perception modulated by word reading.
Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja
2016-10-01
Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that, in participants with high lexical decision performance, sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel method for identifying poor auditory processing in school-aged children and measuring it through a tablet computer. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to investigate further the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.
Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina
2014-12-01
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to fast and accurately recognize what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Semeraro, Hannah D; Bevis, Zoë L; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian J
2015-01-01
The ability to listen to commands in noisy environments and understand acoustic signals, while maintaining situational awareness, is an important skill for military personnel and can be critical for mission success. Seventeen auditory tasks carried out by British infantry and combat-support personnel were identified through a series of focus groups conducted by Bevis et al. For military personnel, these auditory tasks are termed mission-critical auditory tasks (MCATs) if they are carried out in a military-specific environment and have a negative consequence when performed below a specified level. A questionnaire study was conducted to find out which of the auditory tasks identified by Bevis et al. satisfy the characteristics of an MCAT. Seventy-nine British infantry and combat-support personnel from four regiments across the South of England participated. For each auditory task participants indicated: 1) the consequences of poor performance on the task, 2) who performs the task, and 3) how frequently the task is carried out. The data were analysed to determine which tasks are carried out by which personnel, which have the most negative consequences when performed poorly, and which are performed the most frequently. This resulted in a list of 9 MCATs (7 speech communication tasks, 1 sound localization task, and 1 sound detection task) that should be prioritised for representation in a measure of auditory fitness for duty (AFFD) for these personnel. Incorporating MCATs in AFFD measures will help to ensure that personnel have the necessary auditory skills for safe and effective deployment on operational duties.
Auditory, visual, and bimodal data link displays and how they support pilot performance.
Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S
2013-06-01
The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to the instrument panel or mounted on the console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper-left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 s per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and by Lancaster and Casali, the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. Data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.
Auditory memory function in expert chess players
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Like other behavioral skills, auditory memory can be influenced by the strengthening processes that follow long-term chess playing, because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean scores on the dichotic auditory-verbal memory test differed significantly between the two groups, expert chess players and non-chess players (p ≤ 0.001). The difference between the ears' scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that enhanced auditory memory function is related to the strengthening of cognitive performance resulting from long-term chess playing. PMID:26793666
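The group comparison above was run as an independent-samples t-test in SPSS; a minimal Python sketch of the same test is shown below, using hypothetical score arrays in place of the study's data.

```python
# Minimal sketch of the group comparison described above (the study used SPSS 21).
# The score arrays below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

chess_players = np.array([18.2, 19.1, 17.5, 20.0, 18.8, 19.4])      # dichotic memory test scores
non_chess_players = np.array([15.1, 16.0, 14.7, 15.9, 16.3, 15.2])  # matched control scores

t_stat, p_value = stats.ttest_ind(chess_players, non_chess_players)  # independent-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```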
Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier
2016-10-01
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for the patient's internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Modalities of memory: is reading lips like hearing voices?
Maidment, David W; Macken, Bill; Jones, Dylan M
2013-12-01
Functional similarities in verbal memory performance across presentation modalities (written, heard, lipread) are often taken to point to a common underlying representational form upon which the modalities converge. We show here instead that the pattern of performance depends critically on presentation modality and different mechanisms give rise to superficially similar effects across modalities. Lipread recency is underpinned by different mechanisms to auditory recency, and while the effect of an auditory suffix on an auditory list is due to the perceptual grouping of the suffix with the list, the corresponding effect with lipread speech is due to misidentification of the lexical content of the lipread suffix. Further, while a lipread suffix does not disrupt auditory recency, an auditory suffix does disrupt recency for lipread lists. However, this effect is due to attentional capture ensuing from the presentation of an unexpected auditory event, and is evident both with verbal and nonverbal auditory suffixes. These findings add to a growing body of evidence that short-term verbal memory performance is determined by modality-specific perceptual and motor processes, rather than by the storage and manipulation of phonological representations. Copyright © 2013 Elsevier B.V. All rights reserved.
Auditory models for speech analysis
NASA Astrophysics Data System (ADS)
Maybury, Mark T.
This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to those models that incorporate nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.
Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe
2015-04-01
To evaluate the effect of a novel divided attention task (walking under auditory constraints) on gait performance in older adults, and to determine whether this effect was moderated by cognitive status. Validation cohort. General community. Ambulatory older adults without dementia (N=104). Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6 cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a two-factor analysis of variance with one between-subjects factor (cognitively intact vs. minimal cognitive impairment group) and one within-subjects factor (type of auditory stimulus). In both the happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68 cm/s in gait velocity (F(1.86, 191.71)=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F(1.75, 180.42)=10.12; P<.001) compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5-10; n=8) showed an average reduction of 5.45 cm/s in gait velocity (F(1.87, 190.83)=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F(1.79, 183.10)=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences. Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those without minimal cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task with gait on other dual-task paradigms and to separately examine the effect of physiological aging versus cognitive impairment on gait during walking under auditory constraints. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
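The gait analysis above is a mixed-design ANOVA with one between-subjects and one within-subjects factor. The sketch below illustrates that design with the pingouin package on a hypothetical long-format table; the column names and values are assumptions, not the study's data or analysis code.

```python
# Illustrative mixed-design ANOVA (one between-subjects factor, one within-subjects factor),
# loosely mirroring the design described above. The data frame is hypothetical.
import pandas as pd
import pingouin as pg

# Long-format table: one row per subject and auditory condition.
df = pd.DataFrame({
    "subject":   [s for s in range(1, 7) for _ in range(3)],
    "group":     ["intact"] * 9 + ["mci"] * 9,              # between-subjects factor
    "condition": ["none", "happy", "sad"] * 6,              # within-subjects factor
    "velocity":  [98, 101, 100, 95, 99, 98, 102, 104, 105,  # gait velocity (cm/s)
                  90, 85, 86, 92, 88, 87, 94, 89, 90],
})

aov = pg.mixed_anova(data=df, dv="velocity", within="condition",
                     between="group", subject="subject")
print(aov)
```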
Neural circuits in auditory and audiovisual memory.
Plakke, B; Romanski, L M
2016-06-01
Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there has been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Capsaicin Supplementation Reduces Physical Fatigue and Improves Exercise Performance in Mice
Hsu, Yi-Ju; Huang, Wen-Ching; Chiu, Chien-Chao; Liu, Yan-Lin; Chiu, Wan-Chun; Chiu, Chun-Hui; Chiu, Yen-Shuo; Huang, Chi-Chang
2016-01-01
Chili pepper is used as a food, seasoning and has been revered for its medicinal and health claims. It is very popular and is the most common spice worldwide. Capsaicin (CAP) is a major pungent and bioactive phytochemical in chili peppers. CAP has been shown to improve mitochondrial biogenesis and adenosine triphosphate (ATP) production. However, there is limited evidence around the effects of CAP on physical fatigue and exercise performance. The purpose of this study was to evaluate the potential beneficial effects of CAP on anti-fatigue and ergogenic functions following physiological challenge. Female Institute of Cancer Research (ICR) mice from four groups (n = 8 per group) were orally administered CAP for 4 weeks at 0, 205, 410, and 1025 mg/kg/day, which were respectively designated the vehicle, CAP-1X, CAP-2X, and CAP-5X groups. The anti-fatigue activity and exercise performance was evaluated using forelimb grip strength, exhaustive swimming time, and levels of serum lactate, ammonia, glucose, BUN (blood urea nitrogen) and creatine kinase (CK) after a 15-min swimming exercise. The grip strength and exhaustive swimming time of the CAP-5X group were significantly higher than other groups. CAP supplementation dose-dependently reduced serum lactate, ammonia, BUN and CK levels, and increased glucose concentration after the 15-min swimming test. In addition, CAP also increased hepatic glycogen content, an important energy source for exercise. The possible mechanism was relevant to energy homeostasis and the physiological modulations by CAP supplementation. Therefore, our results suggest that CAP supplementation may have a wide spectrum of bioactivities for promoting health, performance improvement and fatigue amelioration. PMID:27775591
Code of Federal Regulations, 2010 CFR
2010-10-01
... evaluation count towards the statutory cap on administrative costs? 2522.540 Section 2522.540 Public Welfare... measurement or evaluation count towards the statutory cap on administrative costs? No, the costs of performance measurement and evaluation do not count towards the statutory five percent cap on administrative...
Study on the application of the time-compressed speech in children.
Padilha, Fernanda Yasmin Odila Maestri Miguel; Pinheiro, Maria Madalena Canina
2017-11-09
To analyze the performance of children without alteration of central auditory processing in the Time-compressed Speech Test. This is a descriptive, observational, cross-sectional study. Study participants were 22 children aged 7-11 years without central auditory processing disorders. The following instruments were used to assess whether these children presented central auditory processing disorders: Scale of Auditory Behaviors, simplified evaluation of central auditory processing, and Dichotic Test of Digits (binaural integration stage). The Time-compressed Speech Test was applied to the children without auditory changes. The participants presented better performance in the list of monosyllabic words than in the list of disyllabic words, but with no statistically significant difference. No influence on test performance was observed with respect to order of presentation of the lists and the variables gender and ear. Regarding age, difference in performance was observed only in the list of disyllabic words. The mean score of children in the Time-compressed Speech Test was lower than that of adults reported in the national literature. Difference in test performance was observed only with respect to the age variable for the list of disyllabic words. No difference was observed in the order of presentation of the lists or in the type of stimulus.
Heimbauer, Lisa A; Antworth, Rebecca L; Owren, Michael J
2012-01-01
Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present ("positive" cuing) or absent ("negative" cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.
Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds
Prather, Jonathan F.
2013-01-01
Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717
ERIC Educational Resources Information Center
Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.
2012-01-01
According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…
Avey, Marc T.; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B.
2011-01-01
Songbird auditory areas (i.e., CMM and NCM) are preferentially activated by playback of conspecific vocalizations relative to heterospecific vocalizations and arbitrary noise [1]–[2]. Here, we asked whether the neural response to auditory stimulation is preferential not simply for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators [3]. Mobbing calls produced in response to smaller, higher-threat predators contain more “D” notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators [4]. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than to the corresponding predator calls, indicating that the degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas, that threat can be conveyed by the signals of different species, and that these signals must be learned. PMID:21909363
Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities
Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo
2015-01-01
Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and the phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness and auditory middle latency response were tested with electrodes placed on the left and right hemispheres. The correlation between the measurements was performed using the Spearman rank correlation coefficient. Results There is some correlation between the tests, especially between the Pa component and syllabic awareness, where moderate negative correlation is observed. Conclusion In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed a low score for the age group, although for the objective examination, prolonged Pa latency in the contralateral via was observed. Negative weak to moderate correlation for Pa wave latency was observed, as was positive weak correlation for Na-Pa amplitude. PMID:26491479
Code of Federal Regulations, 2010 CFR
2010-07-01
... MINING PRODUCTS ELECTRIC CAP LAMPS § 19.9 Performance. In addition to the general design and the safety... respect to performance, as follows: (a) Time of burning and candlepower. Permissible electric cap lamps.... The life of a bulb is the number of hours its main filament will burn in the cap lamp or its...
Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.
Brosch, Michael; Selezneva, Elena; Scheich, Henning
2015-03-01
This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do so with two different sizes of water reward. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex, we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the associations between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advances in predicting cochlear implant outcomes, the benefit of bilateral implantation compared with bimodal stimulation, and how speech perception outcomes of sequential bilateral cochlear implantation can be predicted from bimodal auditory performance in children, remain unclear. This investigation was performed (1) to determine the benefit of sequential bilateral cochlear implantation and (2) to identify factors associated with its outcome. Observational and retrospective study. We retrospectively analyzed 29 patients who received a sequential cochlear implant following a bimodal-fitting condition. Audiological evaluations comprised the categories of auditory performance scores, speech perception with monosyllabic and disyllabic words, and the Korean version of the Ling test. Evaluations were performed before sequential cochlear implantation in the bimodal-fitting condition (CI1+HA) and one year after sequential implantation in the bilateral cochlear implant condition (CI1+CI2). The Good Performance group (GP) was defined as 90% or higher on the monosyllabic and disyllabic tests in the auditory-only condition, or a 20% or greater improvement in scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression. Compared with CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of the Ling results. The preoperative categories of auditory performance score was the only factor associated with being in the GP group (odds ratio = 4.38, 95% confidence interval = 1.07-17.93, p = 0.04). Children with limited language development in the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential cochlear implantation. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
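The predictor analysis above reports an odds ratio with a 95% confidence interval from logistic regression. A minimal sketch of how such odds ratios can be obtained with statsmodels is shown below; the data frame, column names, and values are hypothetical, not the study's dataset.

```python
# Illustrative logistic regression yielding odds ratios with 95% confidence intervals,
# analogous to the predictor analysis described above. All data below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "good_performance": [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0],  # GP membership (outcome)
    "cap_score":        [5, 4, 6, 4, 3, 6, 5, 5, 3, 5, 4, 2],  # preoperative CAP score
    "age_at_ci1":       [2.1, 3.5, 3.0, 1.8, 2.2, 2.4, 4.0, 3.6, 3.8, 1.9, 2.0, 4.2],
})

X = sm.add_constant(df[["cap_score", "age_at_ci1"]])
model = sm.Logit(df["good_performance"], X).fit(disp=0)

odds_ratios = np.exp(model.params)    # exponentiated coefficients = odds ratios
conf_int = np.exp(model.conf_int())   # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```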
Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K
2013-11-01
Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.
Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)
Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.
2015-01-01
An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
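The front end of the scheme above represents cochlear frequency resolution with auditory-filter equivalent rectangular bandwidths (ERBs). The sketch below builds an ERB-spaced analysis grid with the standard Glasberg and Moore formula for normal hearing; it illustrates only this front-end step, not the GMMSE estimator itself, and the hearing-impaired broadening factor is an assumed placeholder.

```python
# Sketch of an ERB-based analysis grid such as the one underlying GMMSE-AMT(ERB).
# Uses the standard Glasberg & Moore (1990) ERB formula for normal hearing;
# the hearing-impaired broadening factor is an illustrative assumption only.
import numpy as np

def erb_bandwidth(f_hz, broadening=1.0):
    """Equivalent rectangular bandwidth (Hz) at center frequency f_hz."""
    return broadening * 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_scale(f_hz):
    """Frequency expressed in ERB-rate units (number of ERBs below f_hz)."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_center_frequencies(f_lo=100.0, f_hi=8000.0, n_bands=32):
    """Center frequencies equally spaced on the ERB-rate scale."""
    e = np.linspace(erb_scale(f_lo), erb_scale(f_hi), n_bands)
    return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37   # inverse of erb_scale

centers = erb_center_frequencies()
bandwidths_nh = erb_bandwidth(centers)                   # normal-hearing ERBs
bandwidths_hi = erb_bandwidth(centers, broadening=1.5)   # broader filters (assumed factor)
```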
Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru
2016-01-01
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
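The "16% and 25% of the variance" figures above correspond to squared correlations between each synchrony measure and reading comprehension. A minimal sketch of that computation, using hypothetical placeholder arrays rather than the study's data:

```python
# Minimal sketch of "variance accounted for": the squared correlation (r^2) between a
# predictor (e.g., audiovisual-synchrony sensitivity) and reading comprehension scores.
# The arrays are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

sync_sensitivity = np.array([0.61, 0.48, 0.72, 0.55, 0.80, 0.43, 0.66, 0.59])
reading_score    = np.array([78,   65,   88,   70,   92,   60,   81,   74  ])

res = stats.linregress(sync_sensitivity, reading_score)
print(f"r = {res.rvalue:.2f}, variance explained = {res.rvalue**2:.0%}")
```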
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.
2012-01-01
Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.
Petrac, D C; Bedwell, J S; Renk, K; Orem, D M; Sims, V
2009-07-01
There have been relatively few studies on the relationship between recent perceived environmental stress and cognitive performance, and the existing studies do not control for state anxiety during the cognitive testing. The current study addressed this need by examining recent self-reported environmental stress and divided attention performance, while controlling for state anxiety. Fifty-four university undergraduates who self-reported a wide range of perceived recent stress (10-item perceived stress scale) completed both single and dual (simultaneous auditory and visual stimuli) continuous performance tests. Partial correlation analysis showed a statistically significant positive correlation between perceived stress and the auditory omission errors from the dual condition, after controlling for state anxiety and auditory omission errors from the single condition (r = 0.41). This suggests that increased environmental stress relates to decreased divided attention performance in auditory vigilance. In contrast, an increase in state anxiety (controlling for perceived stress) was related to a decrease in auditory omission errors from the dual condition (r = - 0.37), which suggests that state anxiety may improve divided attention performance. Results suggest that further examination of the neurobiological consequences of environmental stress on divided attention and other executive functioning tasks is needed.
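The analysis above is a partial correlation: the association between perceived stress and dual-task auditory omission errors after removing variance shared with state anxiety and single-task omissions. A minimal sketch of the standard residualization approach, on hypothetical placeholder data (not the authors' analysis code):

```python
# Sketch of a partial correlation via residualization: correlate the parts of x and y
# that are not linearly explained by the covariates z. Data below are hypothetical.
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation between x and y controlling for the covariate matrix z (n x k)."""
    z = np.column_stack([np.ones(len(x)), z])            # add intercept
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]    # residual of x given z
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]    # residual of y given z
    return stats.pearsonr(rx, ry)

perceived_stress = np.array([12, 25, 8, 30, 18, 22, 15, 27, 10, 20], dtype=float)
omission_errors  = np.array([ 2,  6, 1,  8,  4,  5,  3,  7,  2,  4], dtype=float)
covariates = np.column_stack([
    np.array([35, 42, 30, 50, 38, 44, 36, 47, 32, 40], dtype=float),  # state anxiety
    np.array([ 1,  2,  1,  3,  2,  2,  1,  3,  1,  2], dtype=float),  # single-task omissions
])

r, p = partial_corr(perceived_stress, omission_errors, covariates)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```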
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task with the combined display was slightly, but significantly, better than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of the controllability frequency.
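The auditory display described above maps vertical tracking error onto log frequency over a six-octave range centered at 1 kHz. A minimal sketch of such a mapping, assuming the error signal is normalized to [-1, 1]:

```python
# Sketch of the log-frequency error encoding described above: a normalized vertical
# error in [-1, 1] is mapped onto a six-octave range centered at 1 kHz
# (+/- 3 octaves), so zero error corresponds to the 1 kHz reference tone.
def error_to_frequency(error, center_hz=1000.0, octave_span=6.0):
    """Map a normalized tracking error in [-1, 1] to a tone frequency in Hz."""
    error = max(-1.0, min(1.0, error))   # clamp to the display range
    return center_hz * 2.0 ** (error * octave_span / 2.0)

print(error_to_frequency(0.0))    # 1000.0 Hz (on target)
print(error_to_frequency(1.0))    # 8000.0 Hz (maximum positive error)
print(error_to_frequency(-1.0))   # 125.0  Hz (maximum negative error)
```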
Auditory short-term memory in the primate auditory cortex.
Scott, Brian H; Mishkin, Mortimer
2016-06-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Auditory neuropathy spectrum disorder in late preterm and term infants with severe jaundice.
Saluja, Satish; Agarwal, Asha; Kler, Neelam; Amin, Sanjiv
2010-11-01
To evaluate whether severe jaundice is associated with acute auditory neuropathy spectrum disorder in otherwise healthy late preterm and term neonates. In a prospective observational study, all neonates admitted with severe jaundice at levels at which exchange transfusion may be indicated, as per American Academy of Pediatrics guidelines, had a comprehensive auditory evaluation performed before discharge home. Neonates with infection, perinatal asphyxia, chromosomal disorders, cranio-facial malformations, or a family history of childhood hearing loss were excluded. Comprehensive auditory evaluations (tympanometry, oto-acoustic emission tests, and auditory brainstem evoked responses) were performed by an audiologist unaware of the severity of jaundice. Total serum bilirubin and serum albumin were measured at the institutional chemistry laboratory using the diazo and bromocresol purple methods, respectively. A total of 13 neonates with total serum bilirubin concentrations at which exchange transfusion is indicated as per American Academy of Pediatrics guidelines were admitted to the Neonatal Intensive Care Unit over a 3-month period. Six of the 13 neonates (46%) had audiological findings of acute auditory neuropathy spectrum disorder. There was no significant difference in gestational age, birth weight, hemolysis, serum albumin concentration, peak total serum bilirubin concentration, or peak bilirubin:albumin molar ratio between the six neonates who developed acute auditory neuropathy and the seven neonates who had normal audiological findings. Only two of the six infants with auditory neuropathy spectrum disorder had clinical signs and symptoms of acute bilirubin encephalopathy. Our findings strongly suggest that auditory neuropathy spectrum disorder is a common manifestation of acute bilirubin-induced neurotoxicity in late preterm and term infants with severe jaundice. Our findings also suggest that comprehensive auditory evaluations should be routinely performed in neonates with severe jaundice irrespective of the presence of clinical findings of acute bilirubin encephalopathy. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Operator Performance Measures for Assessing Voice Communication Effectiveness
1989-07-01
[Report front matter, recovered from extraction residue: the report reviews the basis of operator performance and workload assessment techniques, including Broadbent's (1958) limited-capacity filter model of human information processing, and covers auditory attention, auditory memory, and models of information processing (capacity theories).]
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies
2016-01-01
Chronic stress impairs auditory attention in rats, and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE has a key role in A1 and in the attention of stressed rats during tone discrimination. PMID:28082872
Karimi, D; Mondor, T A; Mann, D D
2008-01-01
The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve the performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the direction and/or magnitude of the driving error). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for the auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
Iizuka, Masahiro; Etou, Takeshi; Kumagai, Makoto; Matsuoka, Atsushi; Numata, Yuka; Sagara, Shiho
2017-01-01
Objective This study was performed to confirm the efficacy of long-interval cytapheresis on steroid-dependent ulcerative colitis (UC). Methods To discontinue steroids in patients with steroid-dependent UC, we previously designed a novel regimen of cytapheresis (CAP), which we termed “long-interval cytapheresis (LI-CAP)”, in which CAP was performed as one session every two or three weeks and continued during the whole period of tapering steroid dosage. In this study, we performed LI-CAP therapy 20 times (11 male and 9 female; mean age 41.8 years) between April 2010 and April 2015 for 14 patients with steroid-dependent UC. We evaluated the effectiveness of LI-CAP by examining the improvement in Lichtiger's clinical activity index (CAI), the rate of clinical remission, and the rate of steroid discontinuation. We further examined the rate of sustained steroid-free clinical remission at 6 and 12 months after LI-CAP in patients who successfully discontinued steroid-use after LI-CAP. The primary endpoint was the rate of discontinuation of steroids after LI-CAP. Results The mean CAI score before LI-CAP (7.550) significantly decreased to 1.65 after LI-CAP (p<0.0001). The rate of clinical remission after LI-CAP was 80%. The rate of steroid discontinuation after LI-CAP was 60.0%. The mean dose of daily prednisolone was significantly decreased after LI-CAP (2.30 mg) compared with that before therapy (17.30 mg) (p=0.0003). The rate of sustained steroid-free clinical remission after LI-CAP was 66.7% at 6 months and 66.7% at 12 months. Conclusion We confirmed that LI-CAP has therapeutic effects on reducing the dosage and discontinuing steroids in patients with steroid-dependent UC. PMID:28924114
[Auditory processing evaluation in children born preterm].
Gallo, Júlia; Dias, Karin Ziliotto; Pereira, Liliane Desgualdo; Azevedo, Marisa Frasson de; Sousa, Elaine Colombo
2011-01-01
To assess the performance of children born preterm on auditory processing evaluation, to correlate the data with the behavioral hearing assessment carried out at 12 months of age, and to compare the results with those of the auditory processing evaluation of children born full-term. Participants were 30 children with ages between 4 and 7 years, who were divided into two groups: Group 1 (children born preterm) and Group 2 (children born full-term). The auditory processing results of Group 1 were correlated with data obtained from the behavioral auditory evaluation carried out at 12 months of age. The results were compared between groups. Subjects in Group 1 presented at least one risk indicator for hearing loss at birth. In the behavioral auditory assessment carried out at 12 months of age, 38% of the children in Group 1 were at risk for central auditory processing deficits, and 93.75% presented auditory processing deficits on the evaluation. Significant differences were found between the groups for the temporal order test, the PSI test with ipsilateral competitive message, and the speech-in-noise test. The delay in sound localization ability was associated with temporal processing deficits. Children born preterm have worse performance on auditory processing evaluation than children born full-term. Delay in sound localization at 12 months is associated with deficits in the physiological mechanism of temporal processing in the auditory processing evaluation carried out between 4 and 7 years.
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
2007-08-29
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Animation control of surface motion capture.
Tejera, Margara; Casas, Dan; Hilton, Adrian
2013-12-01
Surface motion capture (SurfCap) of actor performance from multiple view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility of editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space–time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap-driven animation and SurfCap sequences to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluations of each approach and of the integrated animation framework are presented on real SurfCap sequences for actors performing multiple motions with a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Modeling active capping efficacy. 1. Metal and organometal contaminated sediment remediation.
Viana, Priscilla Z; Yin, Ke; Rockne, Karl J
2008-12-01
Cd, Cr, Pb, Ag, As, Ba, Hg, CH3Hg, and CN transport through sand, granular activated carbon (GAC), organoclay, shredded tires, and apatite caps was modeled by deterministic and Monte Carlo methods. Time to 10% breakthrough and cumulative release at 30 and 100 yr were the metrics of effectiveness. Effective caps prevented above-cap concentrations from exceeding USEPA acute criteria at 100 yr, assuming below-cap concentrations at solubility. Sand caps performed best under diffusion due to the greater diffusive path length. Apatite had the best advective performance for Cd, Cr, and Pb. Organoclay performed best for Ag, As, Ba, CH3Hg, and CN. Organoclay and apatite were equally effective for Hg. Monte Carlo analysis was used to determine output sensitivity. Sand was effective under diffusion for Cr within the 50% confidence interval (CI), for Cd and Pb (75% CI), and for As, Hg, and CH3Hg (95% CI). Under diffusion and advection, apatite was effective for Cd, Pb, and Hg (75% CI) and organoclay was effective for Hg and CH3Hg (50% CI). GAC and shredded tires performed relatively poorly. Although no single cap is a panacea, apatite and organoclay have the broadest range of effectiveness. Cap performance is most sensitive to the partitioning coefficient and hydraulic conductivity, indicating the importance of accurate site-specific measurement of these parameters.
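To make the sensitivity finding above concrete, the following is a minimal sketch of the kind of Monte Carlo screening calculation the abstract describes for an advection-dominated cap, where breakthrough time depends on hydraulic conductivity and the partitioning coefficient. All parameter values, probability distributions, and the simple linear-retardation model are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Hypothetical screening sketch of an advective breakthrough-time metric
# (not the published model; every value below is an assumption).
rng = np.random.default_rng(0)
n_trials = 100_000

L = 0.5          # cap thickness (m), assumed
grad = 0.01      # hydraulic gradient (dimensionless), assumed
porosity = 0.4   # assumed
rho_b = 1600.0   # bulk density (kg/m3), assumed

# The abstract identifies hydraulic conductivity (K) and the partitioning
# coefficient (Kd) as the most sensitive inputs; sample both lognormally.
K = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=n_trials)   # m/s
Kd = rng.lognormal(mean=np.log(1e-2), sigma=1.0, size=n_trials)  # m3/kg

v = K * grad / porosity                      # seepage velocity (m/s)
R = 1.0 + rho_b * Kd / porosity              # linear retardation factor
t_years = L * R / v / (365.25 * 24 * 3600)   # breakthrough time (yr)

print(f"median breakthrough: {np.median(t_years):.0f} yr")
print(f"P(breakthrough < 100 yr): {np.mean(t_years < 100):.2%}")
```

Widening the spread assumed for K or Kd quickly dominates the spread in the breakthrough metric, which is the intuition behind the abstract's call for accurate site-specific measurement of those two parameters.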
Awake craniotomy for assisting placement of auditory brainstem implant in NF2 patients.
Zhou, Qiangyi; Yang, Zhijun; Wang, Zhenmin; Wang, Bo; Wang, Xingchao; Zhao, Chi; Zhang, Shun; Wu, Tao; Li, Peng; Li, Shiwei; Zhao, Fu; Liu, Pinan
2018-06-01
Auditory brainstem implants (ABIs) may be the only opportunity for patients with NF2 to regain some sense of hearing. However, only a very small number of individuals achieve open-set speech understanding and high sentence scores. Suboptimal placement of the ABI electrode array over the cochlear nucleus may be one of the main factors underlying poor auditory performance. In the current study, we present a method of awake craniotomy to assist with ABI placement. Awake surgery and hearing testing via the retrosigmoid approach were performed for vestibular schwannoma resection and auditory brainstem implantation in four patients with NF2. Auditory outcomes and complications were assessed postoperatively. Three of the 4 patients who underwent awake craniotomy during ABI surgery experienced reproducible auditory sensations intraoperatively. Satisfactory numbers of effective electrodes, threshold levels and distinct pitches were achieved in the wake-up hearing test. In addition, relatively few electrodes produced non-auditory percepts. There were no serious complications attributable to the ABI or the awake craniotomy. Awake craniotomy during auditory brainstem implantation is safe and well tolerated in patients with neurofibromatosis type 2 (NF2). This method can potentially improve the accuracy of cochlear nucleus localization during surgery.
Combined Auditory and Vibrotactile Feedback for Human-Machine-Interface Control.
Thorp, Elias B; Larson, Eric; Stepp, Cara E
2014-01-01
The purpose of this study was to determine the effect of the addition of binary vibrotactile stimulation to continuous auditory feedback (vowel synthesis) for human-machine interface (HMI) control. Sixteen healthy participants controlled facial surface electromyography to achieve 2-D targets (vowels). Eight participants used only real-time auditory feedback to locate targets whereas the other eight participants were additionally alerted to having achieved targets with confirmatory vibrotactile stimulation at the index finger. All participants trained using their assigned feedback modality (auditory alone or combined auditory and vibrotactile) over three sessions on three days and completed a fourth session on the third day using novel targets to assess generalization. Analyses of variance performed on the 1) percentage of targets reached and 2) percentage of trial time at the target revealed a main effect for feedback modality: participants using combined auditory and vibrotactile feedback performed significantly better than those using auditory feedback alone. No effect was found for session or the interaction of feedback modality and session, indicating a successful generalization to novel targets but lack of improvement over training sessions. Future research is necessary to determine the cognitive cost associated with combined auditory and vibrotactile feedback during HMI control.
Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.
2013-01-01
Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
2013-06-01
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with aged matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.
Visual performance for trip hazard detection when using incandescent and led miner cap lamps.
Sammarco, John J; Gallagher, Sean; Reyes, Miguel
2010-04-01
Accident data for 2003-2007 indicate that slips, trips, and falls (STFs) are the second leading accident class (17.8%, n=2,441) for lost-time injuries in underground mining. Proper lighting plays a critical role in enabling miners to detect STF hazards in this environment. Often, the only lighting available to the miner is from a cap lamp worn on the miner's helmet. The focus of this research was to determine whether the spectral content of light from light-emitting diode (LED) cap lamps enabled visual performance improvements for the detection of tripping hazards as compared to the incandescent cap lamps traditionally used in underground mining. A secondary objective was to determine the effects of aging on visual performance. The visual performance of 30 subjects was quantified by measuring each subject's speed and accuracy in detecting objects positioned on the floor both in the near field, at 1.83 meters, and in the far field, at 3.66 meters. Near-field objects were positioned at 0 degrees and +/-20 degrees off axis, while far-field objects were positioned at 0 degrees and +/-10 degrees off axis. Three age groups were designated: group A consisted of subjects 18 to 25 years old, group B of subjects 40 to 50 years old, and group C of subjects 51 years and older. Results of the visual performance comparison for a commercially available LED, a prototype LED, and an incandescent cap lamp indicate that the location of objects on the floor, the type of cap lamp used, and subject age all had significant influences on the time required to identify potential trip hazards. The LED-based cap lamps enabled detection times that were, on average, 0.96 seconds (about 13.6%) faster than those recorded for the incandescent cap lamp. The visual performance differences between the commercially available LED and the prototype LED cap lamp were not statistically significant. It can be inferred from these data that the spectral content of LED-based cap lamps could enable significant visual performance improvements for miners in the detection of trip hazards. Published by Elsevier Ltd.
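As a back-of-envelope consistency check on the two speed figures reported above, and assuming the 0.96-second absolute gain and the 13.6% relative gain describe the same comparison of means (the abstract implies this but does not state it), the implied mean detection times are roughly:

```python
# Implied mean detection times, assuming the absolute (0.96 s) and relative
# (13.6%) LED advantages refer to the same comparison of means.
abs_gain = 0.96   # seconds faster with LED cap lamps
rel_gain = 0.136  # fraction faster than incandescent

incandescent_mean = abs_gain / rel_gain   # ~7.1 s implied baseline
led_mean = incandescent_mean - abs_gain   # ~6.1 s implied LED mean
print(f"implied incandescent mean: {incandescent_mean:.2f} s")
print(f"implied LED mean:          {led_mean:.2f} s")
```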
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings, in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Thereby, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Thereby, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Effect of rhythmic auditory cueing on parkinsonian gait: A systematic review and meta-analysis.
Ghai, Shashank; Ghai, Ishan; Schmitz, Gerd; Effenberg, Alfred O
2018-01-11
The use of rhythmic auditory cueing to enhance gait performance in parkinsonian patients is an emerging area of interest. Different theories and underlying neurophysiological mechanisms have been suggested to explain the enhancement in motor performance. However, a consensus on its effects, on the characteristics of effective stimuli, and on training dosage has not yet been reached. A systematic review and meta-analysis was carried out to analyze the effects of different auditory feedbacks on gait and postural performance in patients affected by Parkinson's disease. Systematic identification of published literature was performed adhering to PRISMA guidelines, from inception until May 2017, on the online databases Web of Science, PEDro, EBSCO, MEDLINE, Cochrane, EMBASE and ProQuest. Of 4204 records, 50 studies involving 1892 participants met our inclusion criteria. The analysis revealed an overall positive effect of auditory cueing on gait velocity and stride length, and a negative effect on cadence. Neurophysiological mechanisms, training dosage, effects of higher information-processing constraints, and the use of cueing as an adjunct to medication are thoroughly discussed. The present review bridges gaps in the literature by suggesting the application of rhythmic auditory cueing in conventional rehabilitation approaches to enhance motor performance and quality of life in the parkinsonian community.
Experience and information loss in auditory and visual memory.
Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K
2017-07-01
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.
The effects of early auditory-based intervention on adult bilateral cochlear implant outcomes.
Lim, Stacey R
2017-09-01
The goal of this exploratory study was to determine the types of improvement that sequentially implanted auditory-verbal and auditory-oral adults with prelingual and childhood hearing loss received in bilateral listening conditions, compared to their best unilateral listening condition. Five auditory-verbal adults and five auditory-oral adults were recruited for this study. Participants were seated in the center of a 6-loudspeaker array. BKB-SIN sentences were presented from 0° azimuth, while multi-talker babble was presented from various loudspeakers. BKB-SIN scores in bilateral and the best unilateral listening conditions were compared to determine the amount of improvement gained. As a group, the participants had improved speech understanding scores in the bilateral listening condition. Although not statistically significant, the auditory-verbal group tended to have greater speech understanding with greater levels of competing background noise, compared to the auditory-oral participants. Bilateral cochlear implantation provides individuals with prelingual and childhood hearing loss with improved speech understanding in noise. A higher emphasis on auditory development during the critical language development years may add to increased speech understanding in adulthood. However, other demographic factors such as age or device characteristics must also be considered. Although both auditory-verbal and auditory-oral approaches emphasize spoken language development, they emphasize auditory development to different degrees. This may affect cochlear implant (CI) outcomes. Further consideration should be made in future auditory research to determine whether these differences contribute to performance outcomes. Additional investigation with a larger participant pool, controlled for effects of age and CI devices and processing strategies, would be necessary to determine whether language learning approaches are associated with different levels of speech understanding performance.
Early but not late-blindness leads to enhanced auditory perception.
Wan, Catherine Y; Wood, Amanda G; Reutens, David C; Wilson, Sarah J
2010-01-01
The notion that blindness leads to superior non-visual abilities has been postulated for centuries. Compared to sighted individuals, blind individuals show different patterns of brain activation when performing auditory tasks. To date, no study has controlled for musical experience, which is known to influence auditory skills. The present study tested 33 blind (11 congenital, 11 early-blind, 11 late-blind) participants and 33 matched sighted controls. We showed that the performance of blind participants was better than that of sighted participants on a range of auditory perception tasks, even when musical experience was controlled for. This advantage was observed only for individuals who became blind early in life, and was even more pronounced for individuals who were blind from birth. Years of blindness did not predict task performance. Here, we provide compelling evidence that superior auditory abilities in blind individuals are not explained by musical experience alone. These results have implications for the development of sensory substitution devices, particularly for late-blind individuals.
Bharadwaj, Sneha V; Maricle, Denise; Green, Laura; Allman, Tamby
2015-10-01
The objective of the study was to examine short-term memory and working memory through both visual and auditory tasks in school-age children with cochlear implants. The relationship between the performance on these cognitive skills and reading as well as language outcomes were examined in these children. Ten children between the ages of 7 and 11 years with early-onset bilateral severe-profound hearing loss participated in the study. Auditory and visual short-term memory, auditory and visual working memory subtests and verbal knowledge measures were assessed using the Woodcock Johnson III Tests of Cognitive Abilities, the Wechsler Intelligence Scale for Children-IV Integrated and the Kaufman Assessment Battery for Children II. Reading outcomes were assessed using the Woodcock Reading Mastery Test III. Performance on visual short-term memory and visual working memory measures in children with cochlear implants was within the average range when compared to the normative mean. However, auditory short-term memory and auditory working memory measures were below average when compared to the normative mean. Performance was also below average on all verbal knowledge measures. Regarding reading outcomes, children with cochlear implants scored below average for listening and passage comprehension tasks and these measures were positively correlated to visual short-term memory, visual working memory and auditory short-term memory. Performance on auditory working memory subtests was not related to reading or language outcomes. The children with cochlear implants in this study demonstrated better performance in visual (spatial) working memory and short-term memory skills than in auditory working memory and auditory short-term memory skills. Significant positive relationships were found between visual working memory and reading outcomes. The results of the study provide support for the idea that WM capacity is modality specific in children with hearing loss. Based on these findings, reading instruction that capitalizes on the strengths in visual short-term memory and working memory is suggested for young children with early-onset hearing loss. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie
2018-01-01
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli benefited most patients with severe executive dysfunction or with severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because patients with visual impairment cannot use vision-dependent BCIs, auditory stimuli have been used as a substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines auditory steady state response (ASSR) and spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the same binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can result in better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
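The bits/min figures quoted above are conventionally computed with the Wolpaw information-transfer-rate formula; the sketch below reproduces that calculation for a two-class speller. The selections-per-minute value is a hypothetical assumption, since the abstract does not report trial timing.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate (bits/min) under the standard Wolpaw formula."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits * selections_per_min

# Two-class example with the hybrid system's reported 85.33% accuracy;
# the 20 selections/min rate is made up purely for illustration.
print(f"{wolpaw_itr(2, 0.8533, 20):.2f} bits/min")
```

At 85.33% accuracy each binary selection carries roughly 0.4 bits under this formula, so the reported 9.11 bits/min would correspond to a little over 20 selections per minute.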
Effect of rhythmic auditory cueing on gait in cerebral palsy: a systematic review and meta-analysis.
Ghai, Shashank; Ghai, Ishan; Effenberg, Alfred O
2018-01-01
Auditory entrainment can influence gait performance in movement disorders. The entrainment can induce neurophysiological and musculoskeletal changes that enhance motor execution. However, a consensus on its effects on gait in people with cerebral palsy has yet to be reached. A systematic review and meta-analysis were carried out to analyze the effects of rhythmic auditory cueing on spatiotemporal and kinematic parameters of gait in people with cerebral palsy. Systematic identification of published literature was performed adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses and American Academy for Cerebral Palsy and Developmental Medicine guidelines, from inception until July 2017, on the online databases Web of Science, PEDro, EBSCO, Medline, Cochrane, Embase and ProQuest. Kinematic and spatiotemporal gait parameters were evaluated in a meta-analysis across studies. Of 547 records, nine studies involving 227 participants (108 children/119 adults) met our inclusion criteria. The qualitative review suggested beneficial effects of rhythmic auditory cueing on gait performance among all included studies. The meta-analysis revealed beneficial effects of rhythmic auditory cueing on the gait dynamic index (Hedges' g = 0.9), gait velocity (g = 1.1), cadence (g = 0.3), and stride length (g = 0.5). This review, for the first time, suggests converging evidence supporting the application of rhythmic auditory cueing to enhance gait performance and stability in people with cerebral palsy. The article details the underlying neurophysiological mechanisms and the use of cueing as an efficient home-based intervention. It bridges gaps in the literature and suggests translational approaches for incorporating rhythmic auditory cueing into rehabilitation to enhance gait performance in people with cerebral palsy.
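The pooled effect sizes above are reported as Hedges' g; for readers unfamiliar with the metric, here is a minimal sketch of how a single study's g is conventionally computed from group means and standard deviations. The cued versus uncued gait-velocity numbers are invented for illustration.

```python
import math

def hedges_g(m1: float, sd1: float, n1: int, m2: float, sd2: float, n2: int) -> float:
    """Standardized mean difference with Hedges' small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                 # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # approximate small-sample bias correction
    return d * correction

# Hypothetical cued vs. uncued gait velocity (m/s), for illustration only.
print(f"g = {hedges_g(0.95, 0.20, 15, 0.80, 0.22, 15):.2f}")
```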
DOE Office of Scientific and Technical Information (OSTI.GOV)
Limbach, W.E.; Ratzlaff, T.D.; Anderson, J.E.
1994-12-31
The Protective Cap/Biobarrier Experiment (PCBE), initiated in 1993 at the Idaho National Engineering Laboratory (INEL), is a strip-split plot experiment with three replications designed to rigorously test a 2.0-m loessal soil cap against a cap recommended by the US Environmental Protection Agency and two caps with biological intrusion barriers. Past research at INEL indicates that it should be possible to exclude water from buried wastes using natural materials and natural processes in arid environments rather than expensive materials (geotextiles) and highly engineered caps. The PCBE will also test the effects of two vegetal covers and three irrigation levels on cap performance. Drainage pans, located at the bottom of each plot, will monitor cap failure. Soil water profiles will be monitored biweekly by neutron probe and continuously by time domain reflectometry. The performance of each cap design will be monitored under a variety of conditions through 1998. From 1994 to 1996, the authors will assess plant establishment, rooting depths, patterns of moisture extraction and their interactions among caps, vegetal covers, and irrigation levels. In 1996, they will introduce ants and burrowing mammals to test the structural integrity of each cap design. In 1998, the authors will apply sufficient water to determine the failure limit for each cap design. The PCBE should provide reliable knowledge of the performances of the four cap designs under a variety of conditions and aid in making hazardous-waste management decisions at INEL and at disposal sites in similar environments.
LeBlanc, Jason J; ElSherif, May; Ye, Lingyun; MacKinnon-Cameron, Donna; Li, Li; Ambrose, Ardith; Hatchette, Todd F; Lang, Amanda L; Gillis, Hayley; Martin, Irene; Andrew, Melissa K; Boivin, Guy; Bowie, William; Green, Karen; Johnstone, Jennie; Loeb, Mark; McCarthy, Anne; McGeer, Allison; Moraca, Sanela; Semret, Makeda; Stiver, Grant; Trottier, Sylvie; Valiquette, Louis; Webster, Duncan; McNeil, Shelly A
2017-06-22
Pneumococcal community-acquired pneumonia (CAPSpn) and invasive pneumococcal disease (IPD) cause significant morbidity and mortality worldwide. Although childhood immunization programs have reduced the overall burden of pneumococcal disease, there is insufficient data in Canada to inform immunization policy in immunocompetent adults. This study aimed to describe clinical outcomes of pneumococcal disease in hospitalized Canadian adults, and to determine the proportion of cases caused by vaccine-preventable serotypes. Active surveillance for CAPSpn and IPD in hospitalized adults was performed in hospitals across five Canadian provinces from December 2010 to 2013. CAPSpn cases were identified using sputum culture, blood culture, a commercial pan-pneumococcal urine antigen detection (UAD) assay, or a serotype-specific UAD. The serotype distribution was characterized using the Quellung reaction and PCR-based serotyping on cultured isolates, or using a 13-valent pneumococcal conjugate vaccine (PCV13) serotype-specific UAD assay. In total, 4769 all-cause CAP cases and 81 cases of IPD (non-CAP) were identified. Of the 4769 all-cause CAP cases, a laboratory test for S. pneumoniae was performed in 3851, identifying 14.3% as CAPSpn. Among CAP cases in whom all four diagnostic tests were performed, S. pneumoniae was identified in 23.2% (144/621). CAPSpn cases increased with age, and the burden of illness was evident in terms of requirement for mechanical ventilation, intensive care unit admission, and 30-day mortality. Among serotypeable CAPSpn or IPD results, a predominance of serotypes 3, 7F, 19A, and 22F was observed. The proportion of hospitalized CAP cases caused by a PCV13-type S. pneumoniae ranged between 7.0% and 14.8% among cases with at least one test for S. pneumoniae performed or in whom all four diagnostic tests were performed, respectively. Overall, vaccine-preventable pneumococcal CAP and IPD were shown to be significant causes of morbidity and mortality in hospitalized Canadian adults in the three years following infant PCV13 immunization programs in Canada. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Thinking about touch facilitates tactile but not auditory processing.
Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris
2012-05-01
Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while the studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discriminations of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster as compared to auditory stimuli and vice versa. On average, tactile stimuli were responded to faster as compared to auditory stimuli, and stimuli in the imagery condition were on average responded to slower as compared to baseline performance (left/right discrimination without imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.
The ability to tap to a beat relates to cognitive, linguistic, and perceptual skills
Tierney, Adam T.; Kraus, Nina
2013-01-01
Reading-impaired children have difficulty tapping to a beat. Here we tested whether this relationship between reading ability and synchronized tapping holds in typically-developing adolescents. We also hypothesized that tapping relates to two other abilities. First, since auditory-motor synchronization requires monitoring of the relationship between motor output and auditory input, we predicted that subjects better able to tap to the beat would perform better on attention tests. Second, since auditory-motor synchronization requires fine temporal precision within the auditory system for the extraction of a sound’s onset time, we predicted that subjects better able to tap to the beat would be less affected by backward masking, a measure of temporal precision within the auditory system. As predicted, tapping performance related to reading, attention, and backward masking. These results motivate future research investigating whether beat synchronization training can improve not only reading ability, but potentially executive function and basic auditory processing as well. PMID:23400117
Options for Auditory Training for Adults with Hearing Loss.
Olson, Anne D
2015-11-01
Hearing aid devices alone do not adequately compensate for sensory losses despite significant technological advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to acquire optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs.
Schendzielorz, Philipp; Vollmer, Maike; Rak, Kristen; Wiegner, Armin; Nada, Nashwa; Radeloff, Katrin; Hagen, Rudolf; Radeloff, Andreas
2017-10-01
A cochlear implant (CI) is an electronic prosthesis that can partially restore speech perception capabilities. Optimum information transfer from the cochlea to the central auditory system requires a proper functioning auditory nerve (AN) that is electrically stimulated by the device. In deafness, the lack of neurotrophic support, normally provided by the sensory cells of the inner ear, however, leads to gradual degeneration of auditory neurons with undesirable consequences for CI performance. We evaluated the potential of adipose-derived stromal cells (ASCs) that are known to produce neurotrophic factors to prevent neural degeneration in sensory hearing loss. For this, co-cultures of ASCs with auditory neurons have been studied, and autologous ASC transplantation has been performed in a guinea pig model of gentamicin-induced sensory hearing loss. In vitro ASCs were neuroprotective and considerably increased the neuritogenesis of auditory neurons. In vivo transplantation of ASCs into the scala tympani resulted in an enhanced survival of auditory neurons. Specifically, peripheral AN processes that are assumed to be the optimal activation site for CI stimulation and that are particularly vulnerable to hair cell loss showed a significantly higher survival rate in ASC-treated ears. ASC transplantation into the inner ear may restore neurotrophic support in sensory hearing loss and may help to improve CI performance by enhanced AN survival. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds
Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.
2012-01-01
Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
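The visual-enhancement measure R(a) described above is commonly operationalized as the audiovisual gain normalized by the headroom left above auditory-only performance; the sketch below assumes that definition, since the abstract does not spell the formula out, and the scores are hypothetical.

```python
def visual_enhancement(av_score: float, a_score: float) -> float:
    """Audiovisual gain relative to the room for improvement above auditory-only.

    Assumes the common definition Ra = (AV - A) / (1 - A) for proportion-correct
    scores; this exact formula is an assumption, not quoted from the study.
    """
    return (av_score - a_score) / (1.0 - a_score)

# Hypothetical proportion-correct scores, for illustration only.
print(f"Ra = {visual_enhancement(av_score=0.85, a_score=0.60):.2f}")
```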
Lamm, K; Lamm, C; Lamm, H; Schumann, K
1989-02-01
Nineteen guinea pigs were exposed to impulse noise from gunfire (G3 of the Federal German Army, 156 dB peak SPL), 6+6 shots or 12+6 shots, with a 3-s pulse interval. For simultaneous measurements of pO2, cochlear microphonics (CM) and compound action potentials of the auditory nerve (CAP), we used the thin 0.5-micron microcoaxial needle electrode described by Baumgaertl and Luebbers, which was placed through the round-window membrane into the scala tympani to a depth of 1000 microns. After exposure to the first 6 or 12 gunshots, the pO2 increased by about 20% of the original values in 12 guinea pigs (63%). In the following 30 min of recovery time, the pO2 decreased, stabilized or declined further. Only 3 animals showed a pO2 loss of 70% of the original values; most animals showed a decline of 25% at the end of the recovery period. In all animals, after 6 additional shots the pO2 decreased by only another 5% of the original values. Amplitudes of CM and CAP were reduced by about 40% of the original values after 6 or 12 shots and by another 20%-24% (CM) and 5%-15% (CAP) after 6 additional shots. The intra-arterial blood pressure in the common carotid artery remained constant. The results are discussed with respect to the well-known morphological damage, subsequent ion imbalance and hypoxia within the cortilymph after exposure to gunfire. These changes are reflected in the loss of CM and CAP amplitudes.
Cochlear implant outcomes in children with motor developmental delay.
Amirsalari, Susan; Yousefi, Jaleh; Radfar, Shokofeh; Saburi, Amin; Tavallaie, Seyed Abbas; Hosseini, Mohammad Javad; Noohi, Sima; Hassan Alifard, Mahdieh; Ajallouyean, Mohammad
2012-01-01
Children with multiple handicaps and children with syndromes and conditions resulting in additional disabilities, such as cerebral palsy, global developmental delay and autistic spectrum disorder, are no longer routinely precluded from receiving a cochlear implant. The primary focus of this study was to determine the effect of cochlear implants on the speech perception and intelligibility of deaf children with and without motor developmental delay. In a cohort study, we compared cochlear implant outcomes in two groups of deaf children with or without motor developmental delay (MDD). Among 262 children with pre-lingual profound hearing loss, 28 (10%) had a motor delay based on the Gross Motor Function Classification (GMFC). Children with severe motor delays (classification scale levels 4 and 5) and cognitive delays were excluded. All children completed the Categories of Auditory Perception Scales (CAP) and Speech Intelligibility Rating (SIR) prior to surgery and 24 months after the device was activated. The mean age of the study population was 4.09 ± 1.86 years. In all 262 patients, the mean CAP score after surgery (5.38 ± 0.043) differed markedly from the mean score before surgery (0.482 ± 0.018) (P=0.001). The mean CAP score after surgery was 5.03 for MDD children and 5.77 for children with normal motor development (NMD). The mean SIR score after surgery was 2.53 for MDD children and 2.66 for NMD children. The final CAP and SIR results did not differ significantly between NMD and MDD children (P>0.05). Based on these results, we conclude that children with hearing loss and concomitant MDD as an additional disability can benefit from cochlear implantation similarly to NMD children. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Effects of training and motivation on auditory P300 brain-computer interface performance.
Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S
2016-01-01
Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance can be further optimized. The influence of motivation, mood and workload on performance and P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Auditory and non-auditory effects of noise on health
Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen
2014-01-01
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
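The two integration models named above are often stated in terms of detection sensitivity (d'): an algebraic-sum model predicts that the combined d' is the sum of the unimodal sensitivities, while a Pythagorean-sum model predicts the square root of the sum of their squares. A minimal sketch with hypothetical unimodal d' values follows; the specific numbers are not from the study.

```python
import math

def algebraic_sum(d_a: float, d_t: float) -> float:
    """Combined sensitivity if auditory and tactile inputs add linearly."""
    return d_a + d_t

def pythagorean_sum(d_a: float, d_t: float) -> float:
    """Combined sensitivity if the two channels are integrated independently."""
    return math.hypot(d_a, d_t)

# Hypothetical near-threshold unimodal sensitivities (roughly 65-77% correct).
d_a, d_t = 0.9, 1.0
print(f"algebraic sum:   d' = {algebraic_sum(d_a, d_t):.2f}")    # 1.90
print(f"pythagorean sum: d' = {pythagorean_sum(d_a, d_t):.2f}")  # 1.35
```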
Liu, Yung-Ching; Jhuang, Jing-Wun
2012-07-01
A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. These displays included a visual display, auditory displays with and without spatial compatibility, and hybrid visual-auditory displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks that involved driving, stimulus-response (S-R), divided attention and stress rating. Results show that, among single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and making accurate S-R task decisions. Drivers' best performance was obtained with the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond the fastest and achieve the best accuracy in both the S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Auditory Detection of the Human Brainstem Auditory Evoked Response.
ERIC Educational Resources Information Center
Kidd, Gerald, Jr.; And Others
1993-01-01
This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…
Comparing Monotic and Diotic Selective Auditory Attention Abilities in Children
ERIC Educational Resources Information Center
Cherry, Rochelle; Rubinstein, Adrienne
2006-01-01
Purpose: Some researchers have assessed ear-specific performance of auditory processing ability using speech recognition tasks with normative data based on diotic administration. The present study investigated whether monotic and diotic administrations yield similar results using the Selective Auditory Attention Test. Method: Seventy-two typically…
Auditory Perceptual Abilities Are Associated with Specific Auditory Experience
Zaltz, Yael; Globerson, Eitan; Amir, Noam
2017-01-01
The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which miniscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318
2011-01-01
Background Patients with Enterobacter community-acquired pneumonia (EnCAP) were admitted to our intensive care unit (ICU). Our primary aim was to describe these patients, as few data are available on EnCAP. A comparison with CAP due to common and typical bacteria was performed. Methods Baseline clinical, biological and radiographic characteristics and criteria for health-care-associated pneumonia (HCAP) were compared between each case of EnCAP and thirty age-matched typical CAP cases. A univariate and multivariate logistic regression analysis was performed to determine factors independently associated with EnCAP. Their outcomes were also compared. Results In comparison with CAP due to common bacteria, a lower leukocytosis and constant HCAP criteria were associated with EnCAP. Empiric antibiotic therapy was less effective in EnCAP (20%) than in typical CAP (97%) (p < 0.01). A delay in the initiation of appropriate antibiotic therapy (3.3 ± 1.6 vs. 1.2 ± 0.6 days; p < 0.01) and an increase in the duration of mechanical ventilation (8.4 ± 5.2 vs. 4.0 ± 4.3 days; p = 0.01) and ICU stay were observed in EnCAP patients. Conclusions EnCAP is a severe infection which is more consistent with HCAP than with typical CAP. This retrospectively suggests that the application of HCAP guidelines should have improved EnCAP management. PMID:21569334
Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing
2014-01-01
To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, and (d) Chinese lexical tone recognition in quiet. Self-reported school rank regarding performance in schoolwork was also collected. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
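The partial correlations described above (relating working-memory measures to speech scores while controlling for demographics) can be computed by residualising both variables on the covariates and correlating the residuals. The sketch below shows that procedure on simulated placeholder data; the variable names (age, digit span, sentence score) and all values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

def residualize(y: np.ndarray, covariate: np.ndarray) -> np.ndarray:
    """Residuals of y after an ordinary least-squares fit on the covariate."""
    X = np.column_stack([np.ones(len(y)), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(0)
age = rng.uniform(20, 60, 40)                        # hypothetical covariate
digit_span = 5 + 0.02 * age + rng.normal(0, 1, 40)   # hypothetical working-memory proxy
sentence_score = 50 + 3 * digit_span + rng.normal(0, 5, 40)

# Partial correlation: correlate the two measures after removing the covariate.
r, p = pearsonr(residualize(digit_span, age), residualize(sentence_score, age))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```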
Chickadees discriminate contingency reversals presented consistently, but not frequently.
McMillan, Neil; Hahn, Allison H; Congdon, Jenna V; Campbell, Kimberley A; Hoang, John; Scully, Erin N; Spetch, Marcia L; Sturdy, Christopher B
2017-07-01
Chickadees are high-metabolism, non-migratory birds, and thus an especially interesting model for studying how animals follow patterns of food availability over time. Here, we studied whether black-capped chickadees (Poecile atricapillus) could learn to reverse their behavior and/or to anticipate changes in reinforcement when the reinforcer contingencies for each stimulus were not stably fixed in time. In Experiment 1, we examined the responses of chickadees on an auditory go/no-go task, with constant reversals in reinforcement contingencies every 120 trials across daily testing intervals. Chickadees did not produce above-chance discrimination; however, when trained with a procedure that only reversed after successful discrimination, chickadees were able to discriminate and reverse their behavior successfully. In Experiment 2, we examined the responses of chickadees when reversals were structured to occur at the same time once per day, and chickadees were again able to discriminate and reverse their behavior over time, though they showed no reliable evidence of reversal anticipation. The frequency of reversals throughout the day thus appears to be an important determinant for these animals' performance in reversal procedures.
Encoding of speech sounds at auditory brainstem level in good and poor hearing aid performers.
Shetty, Hemanth Narayan; Puttabasappa, Manjula
Hearing aids are prescribed to alleviate loss of audibility. It has been reported that about 31% of hearing aid users reject their own hearing aid because of annoyance towards background noise. The reason for dissatisfaction can be located anywhere from the hearing aid microphone to the integrity of neurons along the auditory pathway. The aim was to measure spectra from the hearing aid output at the ear canal and the frequency following response recorded at the auditory brainstem in individuals with hearing impairment. A total of sixty participants with moderate sensorineural hearing impairment, aged 15 to 65 years, were involved. Each participant was classified as either a good or a poor hearing aid performer based on the acceptable noise level measure. Stimuli /da/ and /si/ were presented through a loudspeaker at 65 dB SPL. At the ear canal, the spectra were measured in the unaided and aided conditions. At the auditory brainstem, frequency following responses were recorded to the same stimuli. The spectrum measured in each condition at the ear canal was the same in good hearing aid performers and poor hearing aid performers. At the brainstem level, F0 encoding was better, and F0 and F1 energies were significantly higher, in good hearing aid performers than in poor hearing aid performers. Though the hearing aid spectra were almost the same between good hearing aid performers and poor hearing aid performers, subtle physiological variations exist at the auditory brainstem. The result of the present study suggests that neural encoding of speech sound at the brainstem level might be mediated differently in good hearing aid performers than in poor hearing aid performers. Thus, it can be inferred that subtle physiological changes at the auditory brainstem distinguish a person who is willing to accept noise from one who is not. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
The Auditory Skills Necessary for Echolocation: A New Explanation.
ERIC Educational Resources Information Center
Carlson-Smith, C.; Wiener, W. R.
1996-01-01
This study employed an audiometric test battery with nine blindfolded undergraduate students to explore success factors in echolocation. Echolocation performance correlated significantly with several specific auditory measures. No relationship was found between high-frequency sensitivity and echolocation performance. (Author/PB)
Nakashima, Ann; Farinaccio, Rocco
2015-04-01
Noise-induced hearing loss resulting from weapon noise exposure has been studied for decades. A summary of recent work in weapon noise signal analysis, current knowledge of hearing damage risk criteria, and auditory performance in impulse noise is presented. Most of the currently used damage risk criteria are based on data that cannot be replicated or verified. There is a need to address the effects of combined noise exposures, from similar or different weapons and continuous background noise, in future noise exposure regulations. Advancements in hearing protection technology have expanded the options available to soldiers. Individual selection of hearing protection devices that are best suited to the type of exposure, the auditory task requirements, and hearing status of the user could help to facilitate their use. However, hearing protection devices affect auditory performance, which in turn affects situational awareness in the field. This includes communication capability and the localization and identification of threats. Laboratory training using high-fidelity weapon noise recordings has the potential to improve the auditory performance of soldiers in the field, providing a low-cost tool to enhance readiness for combat. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego
2014-06-01
The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions. The conditions were chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. In order to study the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. Results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility would make plausible the use of different encoding strategies to adapt audio-motor coupling for specific tasks.
Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
2014-01-01
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Moreno-García, Inmaculada; Delgado-Pardo, Gracia; Roldán-Blasco, Carmen
2015-03-03
This study assesses attention and response control through visual and auditory stimuli in a primary care pediatric sample. The sample consisted of 191 participants aged between 7 and 13 years old. It was divided into 2 groups: (a) 90 children with ADHD, according to diagnostic (DSM-IV-TR) (APA, 2002) and clinical (ADHD Rating Scale-IV) (DuPaul, Power, Anastopoulos, & Reid, 1998) criteria, and (b) 101 children without a history of ADHD. The aims were: (a) to determine and compare the performance of both groups in attention and response control, and (b) to identify attention and response control deficits in the ADHD group. Assessments were carried out using the Integrated Visual and Auditory Continuous Performance Test (IVA/CPT, Sandford & Turner, 2002). Results showed that the ADHD group had visual and auditory attention deficits, F(3, 170) = 14.38, p < .01, as well as deficits in fine motor regulation (Welch's t-test = 44.768, p < .001) and sensory/motor activity (Welch's t-test = 95.683, p < .001; Welch's t-test = 79.537, p < .001). Both groups exhibited a similar performance in response control, F(3, 170) = .93, p = .43. Children with ADHD showed inattention, mental processing speed deficits, and loss of concentration with visual stimuli. Both groups yielded a better performance in attention with auditory stimuli.
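The group comparisons above rely on Welch's t-test, the unequal-variances variant of the two-sample t-test. A minimal SciPy sketch follows; the group sizes mirror the abstract (90 children with ADHD, 101 controls), but the simulated scores are placeholders, not the reported data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
adhd_scores    = rng.normal(85, 15, 90)     # placeholder fine-motor regulation scores
control_scores = rng.normal(100, 15, 101)

# equal_var=False requests the Welch correction for unequal group variances.
t_stat, p_val = ttest_ind(adhd_scores, control_scores, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.4f}")
```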
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
Auditory Spatial Attention Representations in the Human Cerebral Cortex
Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.
2014-01-01
Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
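The multivoxel pattern analysis referred to above is typically implemented as cross-validated classification of the attended direction from voxel activity patterns within a region of interest. The sketch below shows one common variant (a linear SVM with 5-fold cross-validation) on simulated data; the classifier choice, trial counts, and voxel counts are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)            # 0 = attend left, 1 = attend right
patterns = rng.normal(0, 1, (n_trials, n_voxels))    # simulated ROI voxel patterns
patterns[labels == 1, :20] += 0.5                    # weak direction information in 20 voxels

# Above-chance cross-validated accuracy indicates the ROI carries information
# about the attended direction, even without a topographic map.
accuracy = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```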
Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H
2018-01-01
Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and an auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC region. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) region during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in the bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing. PMID:29870536
Cutanda, Diana; Correa, Ángel; Sanabria, Daniel
2015-06-01
The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.
Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.
2018-01-01
The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with and without additional real-time auditory feedback in which the frequency was mapped in a convergent manner to two different target angles (40° and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning exists between auditory and proprioceptive repositioning. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of the step-wise transposition of the frequency. In a first step, the results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R
2007-12-01
Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.
Auditory processing disorders, verbal disfluency, and learning difficulties: a case study.
Jutras, Benoît; Lagacé, Josée; Lavigne, Annik; Boissonneault, Andrée; Lavoie, Charlen
2007-01-01
This case study reports the findings of auditory behavioral and electrophysiological measures performed on a graduate student (identified as LN) presenting verbal disfluency and learning difficulties. Results of behavioral audiological testing documented the presence of auditory processing disorders, particularly temporal processing and binaural integration. Electrophysiological test results, including middle latency, late latency and cognitive potentials, revealed that LN's central auditory system processes acoustic stimuli differently to a reference group with normal hearing.
Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor
2014-08-01
The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion
Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon
2012-01-01
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was not able to repeat or take dictation, but his speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322
Albouy, Philippe; Cousineau, Marion; Caclin, Anne; Tillmann, Barbara; Peretz, Isabelle
2016-01-06
Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia or specific language impairment might be a low-level sensory dysfunction. In the present study we test this hypothesis in congenital amusia, a neurodevelopmental disorder characterized by severe deficits in the processing of pitch-based material. We manipulated the temporal characteristics of auditory stimuli and investigated the influence of the time given to encode pitch information on participants' performance in discrimination and short-term memory. Our results show that amusics' performance in such tasks scales with the duration available to encode acoustic information. This suggests that in auditory neurodevelopmental disorders, abnormalities in early steps of auditory processing can underlie the high-level deficits (here, musical disabilities). The observation that slowing the temporal dynamics improves amusics' pitch abilities suggests that this approach could serve as a tool for remediation in developmental auditory disorders.
Higher dietary diversity is related to better visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed
2016-04-01
Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS (P<0·05). In conclusion, higher DDS is associated with better visual and auditory sustained attention.
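The covariate-adjusted comparison reported above (attention scores across DDS quartiles, adjusting for age, education, physical activity, energy intake and BMI) corresponds to a linear model with the quartile as a categorical factor. The sketch below shows that structure with statsmodels on simulated placeholder data; all column names and values are invented, so no real effect is expected in the output.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "attention":       rng.normal(40, 35, n),   # placeholder sustained-attention score
    "dds_quartile":    rng.integers(1, 5, n),   # dietary diversity score quartile (1-4)
    "age":             rng.uniform(20, 50, n),
    "education_years": rng.integers(6, 18, n),
    "activity":        rng.normal(0, 1, n),
    "energy_intake":   rng.normal(2000, 400, n),
    "bmi":             rng.normal(26, 4, n),
})

# OLS with the quartile as a categorical factor plus the listed covariates.
model = smf.ols("attention ~ C(dds_quartile) + age + education_years + "
                "activity + energy_intake + bmi", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # adjusted F-test for the quartile factor
```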
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
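The Bayesian inference model mentioned above belongs to the family of causal-inference models in which the observer weighs the probability that the auditory and visual measurements arose from one source versus two. The sketch below computes that posterior by numerical integration over candidate source locations; the noise parameters, spatial prior, and prior probability of a common cause are illustrative assumptions, not the authors' fitted values.

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def p_common(x_aud, x_vis, sigma_aud=8.0, sigma_vis=2.0,
             sigma_prior=20.0, prior_common=0.5):
    """Posterior probability that the auditory and visual measurements share one source."""
    s = np.linspace(-90.0, 90.0, 2001)           # candidate source azimuths (degrees)
    ds = s[1] - s[0]
    prior_s = norm_pdf(s, 0.0, sigma_prior)
    # Likelihood of both measurements under a single shared source location...
    like_one = np.sum(norm_pdf(x_aud, s, sigma_aud) *
                      norm_pdf(x_vis, s, sigma_vis) * prior_s) * ds
    # ...and under two independent source locations.
    like_two = (np.sum(norm_pdf(x_aud, s, sigma_aud) * prior_s) * ds *
                np.sum(norm_pdf(x_vis, s, sigma_vis) * prior_s) * ds)
    post = prior_common * like_one
    return post / (post + (1.0 - prior_common) * like_two)

# Visual capture becomes unlikely as audio-visual disparity grows.
for disparity in (0, 10, 30):    # disparity in degrees
    print(f"{disparity:2d} deg -> p(common cause) = {p_common(disparity, 0.0):.2f}")
```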
Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti
2014-03-01
To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception, intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination: sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.
Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.
2015-01-01
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746
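The "coupled excitatory-inhibitory cortical network" invoked above is commonly modelled with Wilson-Cowan-style rate equations, in which scaling the gain of the inhibitory population shifts the magnitude of the stimulus-evoked excitatory response in opposite directions. The sketch below is a generic toy model of that kind, not the authors' model; its weights, time constants, and nonlinearity are arbitrary choices made for illustration.

```python
import numpy as np

def tone_response(inhib_gain, tone_drive=3.0, t_max=0.5, dt=0.001):
    """Mean excitatory-population activity over the last 100 ms of a simulated tone."""
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0    # arbitrary coupling weights
    tau_e, tau_i = 0.010, 0.020                        # time constants (s)
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 2.0)))     # population nonlinearity
    e = i = 0.0
    trace = []
    for _ in range(int(t_max / dt)):                   # forward Euler integration
        de = (-e + f(w_ee * e - inhib_gain * w_ei * i + tone_drive)) / tau_e
        di = (-i + f(w_ie * e - w_ii * i)) / tau_i
        e, i = e + dt * de, i + dt * di
        trace.append(e)
    return float(np.mean(trace[-100:]))

# Lowering or raising the inhibitory (PV-like) gain moves the tone-evoked
# excitatory response up or down relative to the baseline gain of 1.0.
for gain in (0.5, 1.0, 1.5):
    print(f"inhibitory gain {gain}: mean evoked E activity {tone_response(gain):.2f}")
```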
Auditory and Visual Sustained Attention in Children with Speech Sound Disorder
Murphy, Cristina F. B.; Pagan-Neves, Luciana O.; Wertzner, Haydée F.; Schochat, Eliane
2014-01-01
Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children (18 diagnosed with SSD (8.11±1.231) and 37 typically developing children (8.76±1.461)) were invited to participate in this study. Auditory and visual sustained-attention tasks were applied. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications. PMID:24675815
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
Relationship between Auditory and Cognitive Abilities in Older Adults
Sheft, Stanley
2015-01-01
Objective The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer’s Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariates race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423
Shared and distinct factors driving attention and temporal processing across modalities
Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy
2013-01-01
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009) a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664
Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing
2017-09-01
Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might be different in processing different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited; 10 with well-performing CIs, 10 with poorly performing CIs. Ten age and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found for the FC3 and FC4 areas. No significant difference was found in N1 latencies and P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.
Kim, Jin-Seop; Oh, Duck-Won; Kim, Suhn-Yeop; Choi, Jong-Duk
2011-02-01
To compare the effect of visual and kinesthetic locomotor imagery training on walking performance and to determine the clinical feasibility of incorporating auditory step rhythm into the training. Randomized crossover trial. Laboratory of a Department of Physical Therapy. Fifteen subjects with post-stroke hemiparesis. Four locomotor imagery trainings on walking performance: visual locomotor imagery training, kinesthetic locomotor imagery training, visual locomotor imagery training with auditory step rhythm and kinesthetic locomotor imagery training with auditory step rhythm. The timed up-and-go test and electromyographic and kinematic analyses of the affected lower limb during one gait cycle. After the interventions, significant differences were found in the timed up-and-go test results between the visual locomotor imagery training (25.69 ± 16.16 to 23.97 ± 14.30) and the kinesthetic locomotor imagery training with auditory step rhythm (22.68 ± 12.35 to 15.77 ± 8.58) (P < 0.05). During the swing and stance phases, the kinesthetic locomotor imagery training exhibited significantly increased activation in a greater number of muscles and increased angular displacement of the knee and ankle joints compared with the visual locomotor imagery training, and these effects were more prominent when auditory step rhythm was integrated into each form of locomotor imagery training. The activation of the hamstring during the swing phase and the gastrocnemius during the stance phase, as well as kinematic data of the knee joint, were significantly different for posttest values between the visual locomotor imagery training and the kinesthetic locomotor imagery training with auditory step rhythm (P < 0.05). The therapeutic effect may be further enhanced in the kinesthetic locomotor imagery training than in the visual locomotor imagery training. The auditory step rhythm together with the locomotor imagery training produces a greater positive effect in improving the walking performance of patients with post-stroke hemiparesis.
Umat, Cila; Mukari, Siti Z; Ezan, Nurul F; Din, Normah C
2011-08-01
To examine the changes in short-term auditory memory following the use of a frequency-modulated (FM) system in children with suspected auditory processing disorders (APDs), and also to compare the advantages of bilateral over unilateral FM fitting. This longitudinal study involved 53 children from Sekolah Kebangsaan Jalan Kuantan 2, Kuala Lumpur, Malaysia, who fulfilled the inclusion criteria. The study was conducted from September 2007 to October 2008 in the Department of Audiology and Speech Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia. The children were between 7 and 10 years old and were assigned to 3 groups: 15 in the control group (not fitted with FM), 19 in the unilateral FM-fitting group, and 19 in the bilateral FM-fitting group. Subjects wore the FM system during school time for 12 weeks. Their working memory (WM), best learning (BL), and retention of information (ROI) were measured using the Rey Auditory Verbal Learning Test at pre-fitting, post-fitting (after 12 weeks of FM usage), and at long term (one year after the usage of the FM system ended). There were significant differences in the mean WM (p=0.001), BL (p=0.019), and ROI (p=0.005) scores at the different measurement times, in which the mean scores at long term were consistently higher than at pre-fitting, despite similar performances at baseline (p>0.05). There was no significant difference in performance between the unilateral- and bilateral-fitting groups. The use of an FM system may have a long-term effect on improving selected short-term auditory memory measures in some children with suspected APDs, and two FM receivers may not be needed to obtain these auditory memory benefits.
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
2011-01-01
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. simple detection, discrimination, and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.
Auditory training improves auditory performance in cochlear implanted children.
Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel
2016-07-01
While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity and possible strategies to minimize it is of utmost importance. Our aim here is to test the effects of an auditory training strategy, "Sound in Hands", which uses playful tasks grounded in the theoretical and empirical findings of cognitive science. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions for deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance in trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. Nineteen prelingually deaf children with a unilateral cochlear implant and no additional handicap (4-10 years old) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the Experimental Group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min, and the untrained group served as the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). The EG showed a significant improvement in the identification, discrimination, and auditory memory tasks. The improvement in the ASA task did not reach significance. The CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only. Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits of CI children and for gathering a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs. Copyright © 2016 Elsevier B.V. All rights reserved.
Conrad, Claudius; Konuk, Yusuf; Werner, Paul D.; Cao, Caroline G.; Warshaw, Andrew L.; Rattner, David W.; Stangenberg, Lars; Ott, Harald C.; Jones, Daniel B.; Miller, Diane L; Gee, Denise W.
2012-01-01
OBJECTIVE To explore how the two most important components of surgical performance - speed and accuracy - are influenced by different forms of stress and what the impact of music on these factors is. SUMMARY BACKGROUND DATA Based on a recently published pilot study on surgical experts, we designed an experiment examining the effects of auditory stress, mental stress, and music on surgical performance and learning, and then correlated the data with psychometric measures of the role of music in a novice surgeon’s life. METHODS 31 surgeons were recruited for a crossover study. Surgeons were randomized to four simple standardized tasks to be performed on the Surgical SIM VR laparoscopic simulator, allowing exact tracking of speed and accuracy. Tasks were performed under a variety of conditions, including silence, dichotic music (auditory stress), defined classical music (auditory relaxation), and mental loading (mental arithmetic tasks). Tasks were performed twice to test for memory consolidation and to accommodate for baseline variability. Performance was correlated with the Brief Musical Experience Questionnaire (MEQ). RESULTS Mental loading influences performance with respect to accuracy, speed, and recall more negatively than does auditory stress. Defined classical music might lead to minimally worse performance initially, but leads to significantly improved memory consolidation. Furthermore, psychologic testing of the volunteers suggests that surgeons with greater musical commitment, measured by the MEQ, perform worse under the mental loading condition. CONCLUSION Mental distraction and auditory stress negatively affect specific components of surgical learning and performance. If used appropriately, classical music may positively affect surgical memory consolidation. It also may be possible to predict surgeons’ performance and learning under stress through psychological tests on the role of music in a surgeon’s life. Further investigation is necessary to determine the cognitive processes behind these correlations. PMID:22584632
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements are linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggest that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills, regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
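The slope/intercept and d′ measures mentioned above can be made concrete with a small sketch (illustrative only; all values are hypothetical): the search function is a linear fit of reaction time against set size, and d′ is computed from hit and false-alarm rates.

    # Illustrative only: hypothetical values, not data from the study.
    import numpy as np
    from scipy.stats import norm

    # Visual search: fit RT (ms) as a linear function of set size to obtain
    # the slope (search rate per item) and intercept of the search function.
    set_sizes = np.array([4, 10, 16])             # target plus 3, 9, or 15 distracters
    mean_rts = np.array([820.0, 1010.0, 1190.0])  # hypothetical mean reaction times
    slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
    print(f"slope = {slope:.1f} ms/item, intercept = {intercept:.1f} ms")

    # Auditory search: sensitivity d' from hit and false-alarm rates.
    hit_rate, fa_rate = 0.85, 0.20                # hypothetical proportions
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    print(f"d' = {d_prime:.2f}")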
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
2001-06-01
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
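A rough signal-processing sketch of the kind of supplementary signal described above: a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz. The filter orders, the 50-Hz envelope cutoff, and the use of a Hilbert envelope are illustrative assumptions, not the authors' exact processing chain.

    # Illustrative sketch of a single-band envelope supplement.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_supplement(speech, fs):
        # Octave band centered at 500 Hz spans roughly 354-707 Hz.
        low, high = 500 / np.sqrt(2), 500 * np.sqrt(2)
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        band = filtfilt(b, a, speech)

        # Amplitude envelope via the Hilbert transform, then low-pass smoothed.
        env = np.abs(hilbert(band))
        b_lp, a_lp = butter(2, 50 / (fs / 2))  # assumed 50-Hz envelope cutoff
        env = filtfilt(b_lp, a_lp, env)

        # Modulate a 200-Hz carrier with the extracted envelope.
        t = np.arange(len(speech)) / fs
        return env * np.sin(2 * np.pi * 200 * t)

    fs = 16000
    speech = np.random.randn(fs)  # placeholder for a 1-s speech waveform
    supplement = envelope_supplement(speech, fs)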
Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies
2013-08-29
Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance over 80% of correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after the stress period. A difference score was determined by subtracting the number of correct trials after the stress protocol from those before it. Another set of rats was used to study synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
2016-01-01
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test that assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale that assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile that assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
Directional Effects between Rapid Auditory Processing and Phonological Awareness in Children
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Pennington, Bruce F.; Lee, Nancy Raitano; Boada, Richard
2009-01-01
Background: Deficient rapid auditory processing (RAP) has been associated with early language impairment and dyslexia. Using an auditory masking paradigm, children with language disabilities perform selectively worse than controls at detecting a tone in a backward masking (BM) condition (tone followed by white noise) compared to a forward masking…
Electrophysiological Evidence for the Sources of the Masking Level Difference
ERIC Educational Resources Information Center
Fowler, Cynthia G.
2017-01-01
Purpose: The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). Method: A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used…
ERIC Educational Resources Information Center
Aleman, Cheryl; And Others
1990-01-01
Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)
Precise auditory-vocal mirroring in neurons for learned vocal communication.
Prather, J F; Peters, S; Nowicki, S; Mooney, R
2008-01-17
Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Sullivan, Jessica R; Osman, Homira; Schafer, Erin C
2015-06-01
The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and in noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to demands placed on working memory, supporting the theory that degrading listening conditions draw resources away from the primary task.
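A minimal sketch of how a -5 dB SNR condition like the one used here can be constructed, by scaling multitalker babble relative to the speech signal; the waveforms below are random placeholders, and the scaling rule is the standard power-ratio definition of SNR rather than the study's exact stimulus preparation.

    # Illustrative only: mixing speech and babble at a target SNR of -5 dB.
    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db.
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        target_p_noise = p_speech / (10 ** (snr_db / 10))
        noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
        return speech + noise_scaled

    fs = 16000
    speech = np.random.randn(2 * fs)   # placeholder speech waveform
    babble = np.random.randn(2 * fs)   # placeholder multitalker babble
    mixture = mix_at_snr(speech, babble, snr_db=-5)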
Temporal processing and long-latency auditory evoked potential in stutterers.
Prestes, Raquel; de Andrade, Adriana Neves; Santos, Renata Beatriz Fernandes; Marangoni, Andrea Tortosa; Schiefer, Ana Maria; Gil, Daniela
Stuttering is a speech fluency disorder, and may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. To characterize the temporal processing and long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n=20) and non-stutterers (n=21), compared according to age, education, and sex. All subjects were submitted to the duration pattern tests, random gap detection test, and long-latency auditory evoked potential. Individuals who stutter showed poorer performance on Duration Pattern and Random Gap Detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of N2 and P3 components; stutterers had higher latency values. Stutterers have poor performance in temporal processing and higher latency values for N2 and P3 components. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Designing Smart Charter School Caps
ERIC Educational Resources Information Center
Dillon, Erin
2010-01-01
In 2007, Andrew J. Rotherham proposed a new approach to the contentious issue of charter school caps, the statutory limits on charter school growth in place in several states. Rotherham's proposal, termed "smart charter school caps," called for quality sensitive caps that allow the expansion of high-performing charter schools while also…
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Noise levels from toys and recreational articles for children and teenagers.
Hellstrom, P A; Dengerink, H A; Axelsson, A
1992-10-01
This study examined the noise levels emitted by toys and recreational articles used by children and teenagers. The results indicate that many of the items tested emit sufficiently intense noise to be a source of noise-induced hearing loss in school-age children. While the baby toys provided noise exposure within the limits of national regulations, their output is most intense in a frequency range that corresponds to the resonance frequency of the external auditory canal of very young children. Hobby motors emit noise that may require protection depending upon the length of use. Firecrackers and cap guns emit impulse noises that exceed even conservative standards for noise exposure.
Children's Auditory Working Memory Performance in Degraded Listening Conditions
ERIC Educational Resources Information Center
Osman, Homira; Sullivan, Jessica R.
2014-01-01
Purpose: The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It…
Text as a Supplement to Speech in Young and Older Adults
Krull, Vidya; Humes, Larry E.
2015-01-01
Objective The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults, with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information will result in improved recognition accuracy compared to auditory or visual text information alone; 2) benefit from supplementing speech with visual text (auditory and visual enhancement) in young adults will be greater than that in older adults; and 3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance on auditory-only and visual-text only conditions. Finally, the relationship between perceptual measures and other independent measures was examined using principal-component factor analyses, followed by regression analyses. Results Both young and older adults performed similarly on nine out of ten perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor for a general speech-text integration ability. Conclusions These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. These results also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills. PMID:26458131
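A hedged sketch of the analysis pipeline the abstract names (dimension reduction of the cognitive measures followed by regression onto a perceptual benefit score). PCA is used here as a stand-in for the factor analysis, and all variable names and data are hypothetical.

    # Illustrative only: hypothetical data standing in for the cognitive
    # measures and the audiovisual (speech + text) benefit scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    cognitive = rng.normal(size=(45, 7))   # 45 listeners x 7 cognitive measures
    benefit = rng.normal(size=45)          # audiovisual benefit per listener

    # Reduce the correlated cognitive measures to a few components...
    z = StandardScaler().fit_transform(cognitive)
    components = PCA(n_components=2).fit_transform(z)

    # ...then regress the perceptual benefit on the component scores.
    model = LinearRegression().fit(components, benefit)
    print("R^2 =", model.score(components, benefit))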
Bellis, Teri James; Billiet, Cassie; Ross, Jody
2011-09-01
Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al, 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous and emphasize the nonmodularity of the CANs (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA, HLD [Humming-Labeling Differential]). Children with both ADHD and (C)APD performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD group performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005 ; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates. 
Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process. American Academy of Audiology.
Henshaw, Helen; Ferguson, Melanie A.
2013-01-01
Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431
Active Coupled Oscillators in the Inner Ear
NASA Astrophysics Data System (ADS)
Strimbu, Clark Elliott
Auditory and vestibular systems are endowed with an active process that enables them to detect signals as small as a few Angstroms; they also exhibit frequency selectivity, show strong nonlinearities, and can exhibit spontaneous activity. Much of this active process comes from the sensory hair cells at the periphery of the auditory and vestibular systems. Each hair cell is capped by an eponymous hair bundle, a specialized structure that transduces mechanical forces into electrical signals. Experiments on mechanically decoupled cells from the frog sacculus have shown that individual hair bundles behave in an active manner analogous to an intact organ, suggesting a common cellular basis for the active processes seen in many species. In particular, mechanically decoupled hair bundles show rapid active movements in response to transient stimuli and exhibit spontaneous oscillations. However, a single mechanosensitive hair cell is unable to match the performance of an entire organ. In vivo, hair bundles are often coupled to overlying membranes, gelatinous extracellular matrices. We used an in vitro preparation of the frog sacculus in which the otolithic membrane has been left intact. Under natural coupling conditions, there is a strong degree of correlation across the saccular epithelium, suggesting that the collective response of many cells contributes to the extreme sensitivity of this organ. When the membrane is left intact, the hair bundles do not oscillate spontaneously, showing that the natural coupling and loading tune them into a quiescent regime. However, when stimulated by a pulse, the bundles show a rapid biphasic response that is abolished when the transduction channels are blocked. The active forces generated by the bundles are sufficient to move the overlying membrane.
Marijuana and Human Performance: An Annotated Bibliography (1970-1975)
1976-03-01
[Index page-number listings from the bibliography omitted.] Auditory and visual threshold effects of marihuana in man. Perceptual & Motor Skills, 1969, 29, 755-759. Auditory and visual thresholds were measured...a "high." Results indicated no effect on visual acuity, whereas one of three auditory measurements differentiated between marihuana and control
Degradation of chloramphenicol by potassium ferrate (VI) oxidation: kinetics and products.
Zhou, Jia-Heng; Chen, Kai-Bo; Hong, Qian-Kun; Zeng, Fan-Cheng; Wang, Hong-Yu
2017-04-01
The oxidation of chloramphenicol (CAP) by potassium ferrate (Fe(VI)) in test solution was studied in this paper. A series of jar tests was performed at bench scale with pH of 5-9 and molar ratio [Fe(VI)]/[CAP] of 16.3:1-81.6:1. Results showed that raising the Fe(VI) dose could improve the treatment performance, and the influence of solution pH was significant. Fe(VI) is more reactive under neutral conditions, presenting the highest removal efficiency of CAP. The rate law for the oxidation of CAP by Fe(VI) was first order with respect to each reactant, yielding an overall second-order reaction. Furthermore, five oxidation products were observed during CAP oxidation by Fe(VI). Results revealed that Fe(VI) attacked the amide group of CAP, leading to cleavage of the group, while the benzene ring remained intact.
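For reference, the reported kinetics (first order in each reactant, second order overall) correspond to a rate law of the standard form below; this is the textbook expression implied by the abstract, not an equation quoted from the paper.

    \[
      -\frac{d[\mathrm{CAP}]}{dt} \;=\; k\,[\mathrm{Fe(VI)}]\,[\mathrm{CAP}]
    \]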
Ghai, Shashank; Ghai, Ishan
2018-01-01
Rhythmic auditory cueing has been shown to enhance gait performance in several movement disorders. The “entrainment effect” generated by the stimulations can enhance auditory-motor coupling and instigate plasticity. However, a consensus as to its influence on gait training among patients with multiple sclerosis is still warranted. A systematic review and meta-analysis was carried out to analyze the effects of rhythmic auditory cueing on gait performance in patients with multiple sclerosis. This systematic identification of published literature was performed according to PRISMA guidelines, from inception until Dec 2017, on the online databases Web of Science, PEDro, EBSCO, MEDLINE, Cochrane, EMBASE, and PROQUEST. Studies were critically appraised using the PEDro scale. Of 602 records, five studies (PEDro score: 5.7 ± 1.3) involving 188 participants (144 females/40 males) met our inclusion criteria. The meta-analysis revealed enhancements in spatiotemporal parameters of gait, i.e., velocity (Hedges' g: 0.67), stride length (0.70), and cadence (1.0), and a reduction in the timed 25-foot walk test (−0.17). Underlying neurophysiological mechanisms and clinical implications are discussed. The present review bridges the gaps in the literature by suggesting the application of rhythmic auditory cueing in conventional rehabilitation approaches to enhance gait performance in the multiple sclerosis community. PMID:29942278
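The effect sizes above are Hedges' g values; for reference, the standard definition with the small-sample correction factor is sketched below (textbook form, not an equation taken from the review).

    \[
      g \;=\; J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p},
      \qquad
      s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}},
      \qquad
      J \approx 1 - \frac{3}{4(n_1+n_2)-9}
    \]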
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Relation between measures of speech-in-noise performance and measures of efferent activity
NASA Astrophysics Data System (ADS)
Smith, Brad; Harkrider, Ashley; Burchfield, Samuel; Nabelek, Anna
2003-04-01
Individual differences in auditory perceptual abilities in noise are well documented but the factors causing such variability are unclear. The purpose of this study was to determine if individual differences in responses measured from the auditory efferent system were correlated to individual variations in speech-in-noise performance. The relation between behavioral performance on three speech-in-noise tasks and two objective measures of the efferent auditory system were examined in thirty normal-hearing, young adults. Two of the speech-in-noise tasks measured an acceptable noise level, the maximum level of speech-babble noise that a subject is willing to accept while listening to a story. For these, the acceptable noise level was evaluated using both an ipsilateral (story and noise in same ear) and a contralateral (story and noise in opposite ears) paradigm. The third speech-in-noise task evaluated speech recognition using monosyllabic words presented in competing speech babble. Auditory efferent activity was assessed by examining the resulting suppression of click-evoked otoacoustic emissions following the introduction of a contralateral, broad-band stimulus and the activity of the ipsilateral and contralateral acoustic reflex arc was evaluated using tones and broad-band noise. Results will be discussed relative to current theories of speech in noise performance and auditory inhibitory processes.
Auditory Short-Term Memory Activation during Score Reading
Simoens, Veerle L.; Tervaniemi, Mari
2013-01-01
Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback. PMID:23326487
ERIC Educational Resources Information Center
Reinertsen, Gloria M.
A study compared performances on a test of selective auditory attention between students educated in open-space versus closed classroom environments. An open-space classroom environment was defined as having no walls separating it from hallways or other classrooms. It was hypothesized that the incidence of auditory figure-ground (ability to focus…
Switching in the Cocktail Party: Exploring Intentional Control of Auditory Selective Attention
ERIC Educational Resources Information Center
Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael
2011-01-01
Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…
Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss
ERIC Educational Resources Information Center
Koravand, Amineh; Jutras, Benoit
2013-01-01
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E
2011-04-29
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.
Jongman, Suzanne R; Roelofs, Ardi; Scheper, Annette R; Meyer, Antje S
2017-05-01
Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality. To address the outstanding issue whether the impairment in sustained attention is limited to the auditory domain, or if it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills. Groups of 7-9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs). Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups. These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups. © 2016 Royal College of Speech and Language Therapists.
The Measurement of Auditory Abilities of Blind, Partially Sighted, and Sighted Children.
ERIC Educational Resources Information Center
Stankov, Lazar; Spilsbury, Georgina
1979-01-01
Auditory tests were administered to 30 blind, partially sighted, and sighted children. Overall, the blind and sighted were equal on most of the measured abilities. Blind children performed well on tonal memory tests. Partially sighted children performed more poorly than the other two groups. (MH)
Daikoku, Tatsuya; Takahashi, Yuji; Futagami, Hiroko; Tarumoto, Nagayoshi; Yasuda, Hideki
2017-02-01
In real-world auditory environments, humans are exposed to overlapping auditory information, such as human voices and musical instruments, even during routine physical activities such as walking and cycling. The present study investigated how concurrent physical exercise affects the performance of incidental and intentional learning of overlapping auditory streams, and whether physical fitness modulates learning performance. Participants were divided into lower- and higher-fitness groups of 11 each, based on their VO2 max values. They were presented with two simultaneous auditory sequences, each with a distinct statistical regularity (i.e. statistical learning), while pedaling on a bike and while sitting on the bike at rest. In Experiment 1, they were instructed to attend to one of the two sequences and to ignore the other. In Experiment 2, they were instructed to attend to both sequences. After exposure to the sequences, learning effects were evaluated with a familiarity test. In Experiment 1, statistical learning of the ignored sequences during concurrent pedaling was higher in participants with high than with low physical fitness, whereas for the attended sequence there was no significant difference in statistical learning between high- and low-fitness participants. Furthermore, there was no significant effect of physical fitness on learning while resting. In Experiment 2, participants with both high and low physical fitness could perform intentional statistical learning of the two simultaneous sequences in both the exercise and rest sessions. Improvement in physical fitness might facilitate incidental, but not intentional, statistical learning of simultaneous auditory sequences during concurrent physical exercise.
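A small sketch of how a tone stream with its own statistical regularity (of the kind used in auditory statistical-learning paradigms) can be generated from a first-order transition matrix; the tone set and transition probabilities are hypothetical, not those of the study.

    # Illustrative only: generating one auditory stream whose tone order
    # follows a first-order Markov (transition-probability) regularity.
    import numpy as np

    rng = np.random.default_rng(1)
    tones_hz = [440, 494, 554, 622]          # hypothetical tone set
    transition = np.array([                  # P(next tone | current tone)
        [0.7, 0.1, 0.1, 0.1],
        [0.1, 0.7, 0.1, 0.1],
        [0.1, 0.1, 0.7, 0.1],
        [0.1, 0.1, 0.1, 0.7],
    ])

    def generate_stream(n_tones, start=0):
        seq = [start]
        for _ in range(n_tones - 1):
            seq.append(rng.choice(len(tones_hz), p=transition[seq[-1]]))
        return [tones_hz[i] for i in seq]

    stream = generate_stream(200)  # one of the two simultaneous sequences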
Berti, Stefan
2013-01-01
Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including a sudden onset of a transient sound and a slight deviation of otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanisms and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it is open whether transient detection—which does not rely on predictive coding mechanisms—can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or are presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is higher compared to transients. These findings suggest different routes of distraction allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a pre-requisite for adaptive behavior. PMID:23874278
Noise-induced tinnitus: auditory evoked potential in symptomatic and asymptomatic patients.
Santos-Filha, Valdete Alves Valentins dos; Samelli, Alessandra Giannella; Matas, Carla Gentile
2014-07-01
We evaluated the central auditory pathways in workers with noise-induced tinnitus and normal hearing thresholds, compared the auditory brainstem response results in groups with and without tinnitus, and correlated tinnitus location with the auditory brainstem response findings in individuals with a history of occupational noise exposure. Sixty individuals participated in the study, and the following procedures were performed: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies between 0.25 and 8 kHz, and auditory brainstem response. The mean auditory brainstem response latencies were lower in the Control group than in the Tinnitus group, but no significant differences between the groups were observed. Qualitative analysis showed more alterations in the lower brainstem in the Tinnitus group. The strongest relationship between tinnitus location and auditory brainstem response alterations was detected in individuals with bilateral tinnitus and bilateral auditory brainstem response alterations compared with patients with unilateral alterations. Our findings suggest the occurrence of a possible dysfunction in the central auditory nervous system (brainstem) in individuals with noise-induced tinnitus and a normal hearing threshold.
Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila
2017-01-01
Introduction: Auditory hallucinations are defined as experiences of auditory perceptions in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS) – a safe, portable, and inexpensive neuromodulation technique – has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence available in the literature for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed, searching the main electronic databases, including the Cochrane Library and MEDLINE/PubMed. The searches were performed by combining descriptors, applying terms from the Medical Subject Headings (MeSH), the Health Sciences Descriptors, and combinations of these descriptors. The PRISMA protocol was used as a guide, and the terms used were the clinical outcomes (“Schizophrenia” OR “Auditory Hallucinations” OR “Auditory Verbal Hallucinations” OR “Psychosis”) searched together (“AND”) with interventions (“transcranial Direct Current Stimulation” OR “tDCS” OR “Brain Polarization”). Results: Six randomized controlled trials that evaluated the effects of tDCS on the severity of auditory hallucinations in schizophrenic patients were selected. Analysis of the clinical results of these studies pointed toward inconsistency in the available information with regard to the therapeutic use of tDCS for reducing the severity of auditory hallucinations in schizophrenia. Only three studies revealed a therapeutic benefit, manifested by reductions in the severity and frequency of auditory verbal hallucinations in schizophrenic patients. Conclusion: Although tDCS has shown promising results in reducing the severity of auditory hallucinations in schizophrenic patients, this technique cannot yet be used as a therapeutic alternative due to the lack of studies with large sample sizes that portray the positive effects that have been described. PMID:28203084
Auditory reafferences: the influence of real-time feedback on movement control.
Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus
2015-01-01
Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
Electrophysiological Evidence for the Sources of the Masking Level Difference.
Fowler, Cynthia G
2017-08-16
The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem codes temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.
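As background for the threshold differences discussed above, the masking level difference is conventionally quantified as the improvement in masked threshold when the signal (or the noise) is given an interaural phase difference. A minimal formulation in the common S0N0 / SπN0 notation (standard textbook usage, not taken from this article) is:

\[
\mathrm{MLD\ (dB)} = T_{S_0 N_0} - T_{S_\pi N_0},
\]

where \(T\) denotes the masked detection threshold in dB for the corresponding binaural signal/noise configuration.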
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds, and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate executive functions of patients. The study included 40 patients who have had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. Across the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly more disadvantaged by the non-contextual sounds from living beings, sounds from supermarket objects, and names of other products than by beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Rhythm synchronization performance and auditory working memory in early- and late-trained musicians.
Bailey, Jennifer A; Penhune, Virginia B
2010-07-01
Behavioural and neuroimaging studies provide evidence for a possible "sensitive" period in childhood development during which musical training results in long-lasting changes in brain structure and auditory and motor performance. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 (early-trained; ET) perform better on a visuomotor task than those who begin after the age of 7 (late-trained; LT), even when matched on total years of musical training and experience. Two questions were raised regarding the findings from this experiment. First, would this group performance difference be observed using a more familiar, musically relevant task such as auditory rhythms? Second, would cognitive abilities mediate this difference in task performance? To address these questions, ET and LT musicians, matched on years of musical training, hours of current practice and experience, were tested on an auditory rhythm synchronization task. The task consisted of six woodblock rhythms of varying levels of metrical complexity. In addition, participants were tested on cognitive subtests measuring vocabulary, working memory and pattern recognition. The two groups of musicians differed in their performance of the rhythm task, such that the ET musicians were better at reproducing the temporal structure of the rhythms. There were no group differences on the cognitive measures. Interestingly, across both groups, individual task performance correlated with auditory working memory abilities and years of formal training. These results support the idea of a sensitive period during the early years of childhood for developing sensorimotor synchronization abilities via musical training.
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Accordingly, a better memory performance for visual compared to, e.g., auditory material is assumed. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.
Statistical learning and auditory processing in children with music training: An ERP study.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
2017-07-01
The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to detect implicitly statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task, in order to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had not been presented or, if it had, with which type of information it had been presented during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of the MEG data indicated higher equivalent current dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
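The confidence-of-recognition index d' mentioned above is conventionally computed from hit and false-alarm rates via the inverse-normal (z) transform. A minimal sketch of that standard computation (the correction constant and function name are illustrative, not taken from this study):

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections, eps=0.5):
        # Log-linear correction: eps added to every cell avoids infinite
        # z-scores when hit or false-alarm rates are exactly 0 or 1.
        hit_rate = (hits + eps) / (hits + misses + 2 * eps)
        fa_rate = (false_alarms + eps) / (false_alarms + correct_rejections + 2 * eps)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Example: 40 old and 40 new words in a recognition test
    print(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30))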
Bouncing Ball with a Uniformly Varying Velocity in a Metronome Synchronization Task.
Huang, Yingyu; Gu, Li; Yang, Junkai; Wu, Xiang
2017-09-21
Sensorimotor synchronization (SMS), a fundamental human ability to coordinate movements with external rhythms, has long been thought to be modality specific. In the canonical metronome synchronization task, which requires tapping a finger along with an isochronous sequence, a well-established finding is that synchronization is much more stable to an auditory sequence consisting of auditory tones than to a visual sequence consisting of visual flashes. However, recent studies have shown that periodically moving visual stimuli can substantially improve synchronization compared with visual flashes. In particular, synchronization to a visual bouncing ball with a uniformly varying velocity was found to be no less stable than synchronization to auditory tones. Here, the current protocol describes the application of the bouncing ball with a uniformly varying velocity in a metronome synchronization task. The usage of the bouncing ball in sequences with different inter-onset intervals (IOI) is included. The representative results illustrate synchronization performance for the bouncing ball, as compared with the performances for auditory tones and visual flashes. Given its comparable synchronization performance to that of auditory tones, the bouncing ball is of particular importance for addressing the current research question of whether modality-specific mechanisms underlie SMS.
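Synchronization stability in tasks like this is commonly summarized by the mean and variability of tap-to-onset asynchronies. A hedged sketch of that conventional computation (not the protocol's actual analysis code; names and values are illustrative):

    import numpy as np

    def asynchrony_stats(tap_times, onset_times):
        # Pair each tap with the nearest metronome onset and summarize the
        # signed asynchronies (tap minus onset), in the same time units.
        onsets = np.asarray(onset_times, dtype=float)
        taps = np.asarray(tap_times, dtype=float)
        asyn = np.array([t - onsets[np.argmin(np.abs(onsets - t))] for t in taps])
        return asyn.mean(), asyn.std(ddof=1)

    # Example: 600-ms inter-onset interval, taps slightly anticipating the beat
    onsets = np.arange(0.0, 12.0, 0.6)
    taps = onsets + np.random.normal(-0.03, 0.02, size=onsets.size)
    print(asynchrony_stats(taps, onsets))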
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
2013-02-01
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
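The direct-to-reverberant ratio cue referred to above is usually defined from a room impulse response as the energy of the direct sound relative to the energy of the later reverberant part. A minimal sketch under that standard definition (the 2.5-ms direct window is a common convention and an assumption here, not a value from this study):

    import numpy as np

    def direct_to_reverberant_ratio(impulse_response, fs, direct_window_ms=2.5):
        # DRR in dB: energy in a short window around the direct-path peak
        # versus energy in the rest of the impulse response.
        ir = np.asarray(impulse_response, dtype=float)
        peak = int(np.argmax(np.abs(ir)))
        half_win = int(fs * direct_window_ms / 1000.0)
        start, stop = max(0, peak - half_win), peak + half_win + 1
        direct_energy = np.sum(ir[start:stop] ** 2)
        reverb_energy = np.sum(ir[:start] ** 2) + np.sum(ir[stop:] ** 2)
        return 10.0 * np.log10(direct_energy / reverb_energy)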
Vanniasegaram, Iyngaram; Cohen, Mazal; Rosen, Stuart
2004-12-01
To compare the auditory function of normal-hearing children attending mainstream schools who were referred for an auditory evaluation because of listening/hearing problems (suspected auditory processing disorders [susAPD]) with that of normal-hearing control children. Sixty-five children with a normal standard audiometric evaluation, ages 6-14 yr (32 of whom were referred for susAPD, with the rest age-matched control children), completed a battery of four auditory tests: a dichotic test of competing sentences; a simple discrimination of short tone pairs differing in fundamental frequency at varying interstimulus intervals (TDT); a discrimination task using consonant cluster minimal pairs of real words (CCMP), and an adaptive threshold task for detecting a brief tone presented either simultaneously with a masker (simultaneous masking) or immediately preceding it (backward masking). Regression analyses, including age as a covariate, were performed to determine the extent to which the performance of the two groups differed on each task. Age-corrected z-scores were calculated to evaluate the effectiveness of the complete battery in discriminating the groups. The performance of the susAPD group was significantly poorer than the control group on all but the masking tasks, which failed to differentiate the two groups. The CCMP discriminated the groups most effectively, as it yielded the lowest number of control children with abnormal scores, and performance in both groups was independent of age. By contrast, the proportion of control children who performed poorly on the competing sentences test was unacceptably high. Together, the CCMP (verbal) and TDT (nonverbal) tasks detected impaired listening skills in 56% of the children who were referred to the clinic, compared with 6% of the control children. Performance on the two tasks was not correlated. Two of the four tests evaluated, the CCMP and TDT, proved effective in differentiating the two groups of children of this study. The application of both tests increased the proportion of susAPD children who performed poorly compared with the application of each test alone, while reducing the proportion of control subjects who performed poorly. The findings highlight the importance of carrying out a complete auditory evaluation in children referred for medical attention, even if their standard audiometric evaluation is unremarkable.
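The age-corrected z-scores described above are typically obtained by regressing performance on age in the control group and standardizing all scores against the control residuals. A hedged sketch of one common way to do this (not necessarily the authors' exact procedure):

    import numpy as np

    def age_corrected_z(scores, ages, control_scores, control_ages):
        # Fit score ~ age in controls, then express everyone's residual in
        # units of the control residual standard deviation.
        slope, intercept = np.polyfit(control_ages, control_scores, 1)
        control_resid = np.asarray(control_scores) - (slope * np.asarray(control_ages) + intercept)
        resid = np.asarray(scores) - (slope * np.asarray(ages) + intercept)
        return resid / control_resid.std(ddof=1)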
Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Munesue, Toshio; Ono, Yasuki; Tsubokawa, Tsunehisa; Haruta, Yasuhiro; Oi, Manabu; Niida, Yo; Remijn, Gerard B; Takahashi, Tsutomu; Suzuki, Michio; Higashida, Haruhiro; Minabe, Yoshio
2013-10-08
Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, the simultaneous quantification of the bilateral auditory-evoked response during binaural hearing is difficult using conventional adult-sized MEG systems. Recently, a child-customised MEG device has facilitated the acquisition of bi-hemispheric recordings, even in young children. Using the child-customised MEG device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the auditory evoked magnetic field (AEF) in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere is correlated with language performance in autism spectrum disorder (ASD) and TD children. We investigated the P50m that is evoked by voice stimuli (/ne/) bilaterally in 33 young children (3 to 7 years old) with ASD and in 30 young children who were typically developing (TD). The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning subjects. The results showed that the children with ASD exhibited significantly less leftward lateralisation in their P50m intensity compared with the TD children. Furthermore, the results of a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age. The children with ASD did not show any correlation between P50m latency and language-related performance; instead, increasing chronological age was a significant predictor of shorter P50m latency in the right hemisphere. Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development.
Examining age-related differences in auditory attention control using a task-switching procedure.
Lawo, Vera; Koch, Iring
2014-03-01
Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.
Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A
2015-11-01
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open-eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (a delay of 8 minutes 30 seconds) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.
Scully, Erin N; Schuldhaus, Brenna C; Congdon, Jenna V; Hahn, Allison H; Campbell, Kimberley A; Wilson, David R; Sturdy, Christopher B
2018-06-08
Black-capped chickadees (Poecile atricapillus) use their namesake chick-a-dee call for multiple functions, altering the features of the call depending on context. For example, duty cycle (the proportion of time filled by vocalizations) and fine structure traits (e.g., number of D notes) can encode contextual factors, such as predator size and food quality. Wilson and Mennill [1] found that chickadees show stronger behavioral responses to playback of chick-a-dee calls with higher duty cycles, but not to the number of D notes. That is, independent of the number of D notes in a call, but dependent on the overall proportion of time filled with vocalization, birds responded more to higher duty cycle playback than to lower duty cycle playback. Here we presented chickadees with chick-a-dee calls that contained either two D notes (referred to hereafter as 2 D) with a low duty cycle, 2 D notes with a high duty cycle, 10 D notes with a high duty cycle, or 2 D notes with a high duty cycle but played in reverse (a non-signaling control). We then measured ZENK expression in the auditory nuclei where perceptual discrimination is thought to occur. Based on the behavioral results of Wilson and Mennill [1], we predicted the highest ZENK expression in response to forward-playing calls with high duty cycles, and no significant difference in ZENK expression between the two forward-playing high duty cycle playbacks (2 D or 10 D). Consistent with the latter prediction, we found no significant difference between forward-playing 2 D and 10 D high duty cycle playbacks. However, contrary to our predictions, we did not find any effect of altering the duty cycle or the number of notes presented. Copyright © 2018 Elsevier B.V. All rights reserved.
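Duty cycle as defined above (the proportion of playback time filled by vocalization) can be computed directly from the on/off times of the calls. A trivial illustrative sketch (function and variable names are hypothetical):

    def duty_cycle(call_intervals, total_duration_s):
        # call_intervals: list of (start, end) times of vocalizations, in seconds
        vocal_time = sum(end - start for start, end in call_intervals)
        return vocal_time / total_duration_s

    # e.g., three 1.5-s calls in a 30-s playback -> 0.15
    print(duty_cycle([(0.0, 1.5), (10.0, 11.5), (20.0, 21.5)], 30.0))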
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Performance. 19.9 Section 19.9 Mineral... MINING PRODUCTS ELECTRIC CAP LAMPS § 19.9 Performance. In addition to the general design and the safety... respect to performance, as follows: (a) Time of burning and candlepower. Permissible electric cap lamps...
ERIC Educational Resources Information Center
Iliadou, Vasiliki; Bamiou, Doris Eva
2012-01-01
Purpose: To investigate the clinical utility of the Children's Auditory Processing Performance Scale (CHAPPS; Smoski, Brunt, & Tannahill, 1992) to evaluate listening ability in 12-year-old children referred for auditory processing assessment. Method: This was a prospective case control study of 97 children (age range = 11;4 [years;months] to…
ERIC Educational Resources Information Center
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten
2012-01-01
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as…
ERIC Educational Resources Information Center
Ceponiene, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Naatanen, Risto
1999-01-01
Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…
... Dyscalculia is defined as difficulty performing mathematical calculations. Math is problematic for many students, but dyscalculia may prevent a teenager from grasping even basic math concepts. Auditory Memory and Processing Disabilities Auditory memory ...
Further evidence of auditory extinction in aphasia.
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
2013-02-01
Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Seventeen IWA (M(age) = 53.19 years) and 17 neurologically intact controls (M(age) = 55.18 years) participated. Auditory stimuli were spoken letters presented in a free-field listening environment. Stimuli were presented in single-stimulus stimulation (SSS) or double-simultaneous stimulation (DSS) trials across 5 conditions designed to determine whether extinction is related to binding, inefficient attention resource allocation, or overall deficits in attention. All participants completed all experimental conditions. Significant extinction was demonstrated only by IWA when sounds were different, providing further evidence of auditory extinction. However, binding requirements did not appear to influence the IWA's performance. Results indicate that, for IWA, auditory extinction may not be attributed to a binding deficit or inefficient attention resource allocation because of equivalent performance across all 5 conditions. Rather, overall attentional resources may be influential. Future research in aphasia should explore the effect of the stimulus presentation in addition to the continued study of attention treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nally, J.V.; Clarke, H.S.; Grecos, G.P.
To assess the effect of CAP on individual kidney function in µRAS, the authors compared computer-assisted 90-second and 15-minute 99mTc-DTPA renal flow studies vs 131I-Hippuran renography with and without CAP. In Group I (n=10), angiograms, split-function C_PAH, DTPA, and Hippuran studies were performed in dogs pre and post µRAS. Group II animals (n=8) with milder stenosis underwent the same protocol, plus DTPA and Hippuran studies; C_PAH and C_IN were performed during CAP (captopril 1.5 mg/kg bolus and 1.5 mg/min x 60 min). Recovery DTPA and Hippuran studies (Rec) were performed and were also obtained using nitroprusside (NP) to lower MAP to a similar degree as CAP. The authors conclude that 99mTc-DTPA studies proved superior to Hippuran renography in both Groups I and II. With mild µRAS, CAP induced a decrease in ipsilateral GFR resulting in striking changes in the 99mTc-DTPA curves such that all were now diagnostic of µRAS. These changes appeared specific for CAP and independent of MAP reduction with NP, and 99mTc-DTPA renal flow studies with CAP unmask unilateral angiotensin II-dependent renal hemodynamic changes.
Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?
ERIC Educational Resources Information Center
Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter
2006-01-01
To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…
ERIC Educational Resources Information Center
Thackray, Richard I.; And Others
The ability to resist distraction is an important requirement for air traffic controllers. The study examined the relationship between performance on the Stroop color-word interference test (a suggested measure of distraction susceptibility) and impairment under auditory distraction on a task requiring the subject to generate random sequences of…
Lifespan Differences in Nonlinear Dynamics during Rest and Auditory Oddball Performance
ERIC Educational Resources Information Center
Muller, Viktor; Lindenberger, Ulman
2012-01-01
Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an…
ERIC Educational Resources Information Center
Meng, Xiangzhi; Sai, Xiaoguang; Wang, Cixin; Wang, Jue; Sha, Shuying; Zhou, Xiaolin
2005-01-01
By measuring behavioural performance and event-related potentials (ERPs) this study investigated the extent to which Chinese school children's reading development is influenced by their skills in auditory, speech, and temporal processing. In Experiment 1, 102 normal school children's performance in pure tone temporal order judgment, tone frequency…
Effects of Visual, Auditory, and Tactile Alerts on Platoon Leader Performance and Decision Making
2005-12-01
Krausman, Andrea S.; Elliott, Linda R.; Pettitt, Rodger A.
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2018-05-01
The trends in cochlear implantation candidacy and benefit have changed rapidly in the last two decades. It is now widely accepted that early implantation leads to better postimplant outcomes. Although some generalizations can be made about postimplant auditory and language performance, neural mechanisms need to be studied to predict individual prognosis. The aim of this study was to use functional magnetic resonance imaging (fMRI) to identify preimplant neuroimaging biomarkers that predict children's postimplant auditory and language outcomes as measured by parental observation/reports. This is a pre-post correlational measures study. Twelve possible cochlear implant candidates with bilateral severe to profound hearing loss were recruited via referrals for clinical magnetic resonance imaging to ensure structural integrity of the auditory nerve for implantation. Participants underwent cochlear implantation at a mean age of 19.4 mo. All children used the advanced combination encoder strategy (ACE, Cochlear Corporation™, Nucleus® Freedom cochlear implants). Three participants received an implant in the right ear and one in the left ear, whereas eight participants received bilateral implants. Participants' preimplant neuronal activation in response to two auditory stimuli was studied using an event-related fMRI method. Blood oxygen level dependent contrast maps were calculated for speech and noise stimuli. The general linear model was used to create z-maps. The Auditory Skills Checklist (ASC) and the SKI-HI Language Development Scale (SKI-HI LDS) were administered to the parents 2 yr after implantation. A nonparametric correlation analysis was implemented between preimplant fMRI activation and postimplant auditory and language outcomes based on ASC and SKI-HI LDS. Statistical Parametric Mapping software was used to create regression maps between fMRI activation and scores on the aforementioned tests. Regression maps were overlaid on the Imaging Research Center infant template and visualized in MRIcro. Regression maps revealed two clusters of brain activation for the speech versus silence contrast and five clusters for the noise versus silence contrast that were significantly correlated with the parental reports. These clusters included auditory and extra-auditory regions such as the middle temporal gyrus, supramarginal gyrus, precuneus, cingulate gyrus, middle frontal gyrus, subgyral, and middle occipital gyrus. Both positive and negative correlations were observed. Correlation values for the different clusters ranged from -0.90 to 0.95 and were significant at a corrected p value of <0.05. Correlations suggest that postimplant performance may be predicted by activation in specific brain regions. The results of the present study suggest that (1) fMRI can be used to identify neuroimaging biomarkers of auditory and language performance before implantation and (2) activation in certain brain regions may be predictive of postimplant auditory and language performance as measured by parental observation/reports. American Academy of Audiology.
First branchial cleft sinus presenting with cholesteatoma and external auditory canal atresia.
Yalçin, Sinasi; Karlidağ, Turgut; Kaygusuz, Irfan; Demirbağ, Erhan
2003-07-01
First branchial cleft abnormalities are rare. They may involve the external auditory canal and middle ear. We describe a 6-year-old girl with congenital external auditory canal atresia, microtia, and cholesteatoma of the mastoid and middle ear in addition to a first branchial cleft abnormality. Clinical features of the patient are briefly described, and the embryological relationship between first branchial cleft anomaly and external auditory canal atresia is discussed. Surgical management of these lesions may include both complete excision of the sinus and reconstructive otologic surgery.
The plastic ear and perceptual relearning in auditory spatial perception
Carlile, Simon
2014-01-01
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, which results in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497
Debellemaniere, Eden; Chambon, Stanislas; Pinaud, Clemence; Thorey, Valentin; Dehaene, David; Léger, Damien; Chennaoui, Mounir; Arnal, Pierrick J.; Galtier, Mathieu N.
2018-01-01
Recent research has shown that auditory closed-loop stimulation can enhance sleep slow oscillations (SO) to improve N3 sleep quality and cognition. Previous studies have been conducted in lab environments. The present study aimed to validate and assess the performance of a novel ambulatory wireless dry-EEG device (WDD) for auditory closed-loop stimulation of SO during N3 sleep at home. The performance of the WDD in automatically detecting N3 sleep and delivering auditory closed-loop stimulation on SO was tested on 20 young healthy subjects who slept with both the WDD and a miniaturized polysomnograph (part 1), in both stimulated and sham nights within a double-blind, randomized and crossover design. The effects of auditory closed-loop stimulation on delta power increase were assessed after one and 10 nights of stimulation in an observational pilot study in the home environment including 90 middle-aged subjects (part 2). The first part, aimed at assessing the quality of the WDD as compared to a polysomnograph, showed that the sensitivity and specificity to automatically detect N3 sleep in real time were 0.70 and 0.90, respectively. The stimulation accuracy of the SO ascending-phase targeting was 45 ± 52°. The second part of the study, conducted in the home environment, showed that the stimulation protocol induced an increase of 43.9% in delta power in the 4-s window following the first stimulation (including evoked potentials and SO entrainment effects). The increase of the SO response to auditory stimulation remained at the same level after 10 consecutive nights. The WDD shows good performance in automatically detecting N3 sleep in real time and in delivering auditory closed-loop stimulation on SO accurately. This stimulation increased SO amplitude during N3 sleep without any adaptation effect after 10 consecutive nights. This tool provides new perspectives for identifying novel sleep EEG biomarkers in longitudinal studies and can be of interest for conducting broad studies on the effects of auditory stimulation during sleep. PMID:29568267
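Targeting the slow-oscillation ascending phase, as described above, is commonly approximated by band-pass filtering the EEG in the SO range and taking the instantaneous Hilbert phase. A minimal offline sketch under those standard assumptions (the filter band and order are illustrative; this is not the WDD's actual real-time algorithm):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def so_phase(eeg, fs, band=(0.5, 2.0), order=2):
        # Instantaneous phase (radians) of the slow-oscillation band.
        # For a sine-like SO, a phase near -pi/2 corresponds to the ascending
        # (negative-to-positive) zero crossing targeted by the stimulation.
        nyq = fs / 2.0
        b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
        so = filtfilt(b, a, eeg)
        return np.angle(hilbert(so))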
Auditory Task Irrelevance: A Basis for Inattentional Deafness
Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.
2018-01-01
Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
The Role of NREM Sleep Instability in Child Cognitive Performance
Bruni, Oliviero; Kohler, Mark; Novelli, Luana; Kennedy, Declan; Lushington, Kurt; Martin, James; Ferri, Raffaele
2012-01-01
Study Objectives: Based on recent reports of the involvement of cyclic alternating pattern (CAP) in cognitive functioning in adults, we investigated the association between CAP parameters and cognitive performance in healthy children. Design: Polysomnographic assessment and standardized neurocognitive testing in healthy children. Settings: Sleep laboratory. Participants: Forty-two children aged 7.6 ± 2.7 years, with an even distribution of body mass percentile (58.5 ± 25.5) and SES reflective of national norms. Measurements: Analysis of sleep macrostructure following the R&K criteria and of cyclic alternating pattern (CAP). The neurocognitive tests were the Stanford Binet Intelligence Scale (5th edition) and a Neuropsychological Developmental Assessment (NEPSY). Results: Fluid reasoning ability was positively associated with CAP rate, particularly during SWS, and with A1 total index and A1 index in SWS. Regression analysis, controlling for age and SES, showed that CAP rate in SWS and A1 index in SWS were significant predictors of nonverbal fluid reasoning, explaining 24% and 22% of the variance in test scores, respectively. Conclusion: This study shows that CAP analysis provides important insights on the role of EEG slow oscillations (CAP A1) in cognitive performance. Children with higher cognitive efficiency showed an increase of phase A1 in total sleep and in SWS. Citation: Bruni O; Kohler M; Novelli L; Kennedy D; Lushington K; Martin J; Ferri R. The role of NREM sleep instability in child cognitive performance. SLEEP 2012;35(5):649-656. PMID:22547891
Binaural auditory beats affect vigilance performance and mood.
Lane, J D; Kasian, S J; Owens, J E; Marsh, G R
1998-01-01
When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
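For illustration, a binaural-beat stimulus of the kind described above can be synthesized by presenting two pure tones, differing in frequency by the desired beat rate, one to each ear. A minimal sketch (carrier frequency, level and duration are illustrative, not the study's stimuli, which were embedded in pink noise):

    import numpy as np

    def binaural_beat(carrier_hz=400.0, beat_hz=16.0, duration_s=5.0, fs=44100):
        # Left and right channels differ by beat_hz, producing a perceived
        # beat at that rate when presented dichotically over headphones.
        t = np.arange(int(duration_s * fs)) / fs
        left = np.sin(2 * np.pi * carrier_hz * t)
        right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
        return np.column_stack([left, right])  # shape (n_samples, 2)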
Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults.
Lee, Ahreum; Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju
2018-04-11
Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, standard deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance on visual/auditory identification (Uni-V, Uni-A) with that on visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in older adults is further discussed in relation to aged cognition.
Cha, Yuri; Kim, Young; Hwang, Sujin; Chung, Yijung
2014-01-01
Motor relearning protocols should involve task-oriented movement, focused attention, and repetition of desired movements. The aim of this study was to investigate the effect of intensive gait training with rhythmic auditory stimulation on postural control and gait performance in individuals with chronic hemiparetic stroke. Twenty patients with chronic hemiparetic stroke participated in this study. Subjects in the rhythmic auditory stimulation training group (10 subjects) underwent intensive gait training with rhythmic auditory stimulation for a period of 6 weeks (30 min/day, five days/week), while those in the control group (10 subjects) underwent intensive gait training for the same duration. Two clinical measures, the Berg balance scale and the stroke-specific quality of life scale, and a 2-dimensional gait analysis system were used as outcome measures. To provide rhythmic auditory stimulation during gait training, the MIDI Cuebase musical instrument digital interface program and KM Player version 3.3 were utilized for this study. Intensive gait training with rhythmic auditory stimulation resulted in significant improvement in scores on the Berg balance scale, gait velocity, cadence, stride length and double support period on the affected side, and the stroke-specific quality of life scale compared with the control group after training. Findings of this study suggest that intensive gait training with rhythmic auditory stimulation improves balance and gait performance, as well as quality of life, in individuals with chronic hemiparetic stroke.
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and conditions for improvement of his performance when one of the visual displays of the tracking errors is supplemented by an auditory feedback. Because the tracking error of the system that is only visually displayed was found to decrease, but not, in general, that of the auditorily supported system, it was concluded that the auditory feedback unloads the operator's visual system, which can then concentrate on the remaining exclusively visual displays.
Applications of psychophysical models to the study of auditory development
NASA Astrophysics Data System (ADS)
Werner, Lynne
2003-04-01
Psychophysical models of listening, such as the energy detector model, have provided a framework from which to characterize the function of the mature auditory system and to explore how mature listeners make use of auditory information in sound identification. The application of such models to the study of auditory development has similarly provided insight into the characteristics of infant hearing and listening. Infants' intensity, frequency, temporal and spatial resolution have been described at least grossly, and some contributions of immature listening strategies to infant hearing have been identified. Infants' psychoacoustic performance is typically poorer than adults' under identical stimulus conditions. However, the infant's performance typically varies with stimulus condition in a way that is qualitatively similar to the adult's performance. In some cases, though, infants perform in a qualitatively different way from adults in psychoacoustic experiments. Further, recent psychoacoustic studies of children suggest that the classic models of listening may be inadequate to describe children's performance. The characteristics of a model that might be appropriate for the immature listener will be outlined and the implications for models of mature listening will be discussed. [Work supported by NIH grants DC00396 and DC04661.]
Bloemsaat, Gijs; Van Galen, Gerard P; Meulenbroek, Ruud G J
2003-05-01
This study investigated the combined effects of orthographical irregularity and auditory memory load on the kinematics of finger movements in a transcription-typewriting task. Eight right-handed touch-typists were asked to type 80 strings of ten seven-letter words. In half the trials an irregularly spelt target word elicited a specific key press sequence of either the left or right index finger. In the other trials regularly spelt target words elicited the same key press sequence. An auditory memory load was added in half the trials by asking participants to remember the pitch of a tone during task performance. Orthographical irregularity was expected to slow down performance. Auditory memory load, viewed as a low level stressor, was expected to affect performance only when orthographically irregular words needed to be typed. The hypotheses were confirmed. Additional analysis showed differential effects on the left and right hand, possibly related to verbal-manual interference and hand dominance. The results are discussed in relation to relevant findings of recent neuroimaging studies.
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
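As an aside on the modeling approach described above, the following is a minimal, hypothetical sketch (in PyTorch, assuming it is available) of a branched network with shared early layers feeding separate speech and music heads. The layer sizes, class counts, and input shape are illustrative placeholders, not the authors' published architecture.

# Hypothetical sketch of a dual-pathway audio network: shared early processing
# followed by separate task-specific "speech" and "music" branches.
import torch
import torch.nn as nn

class BranchedAudioNet(nn.Module):
    def __init__(self, n_words=1000, n_genres=40):  # placeholder class counts
        super().__init__()
        # Shared early stage over a (batch, 1, freq, time) cochleagram-like input.
        self.shared = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Separate task-specific pathways after the shared stem.
        def head(n_out):
            return nn.Sequential(
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_out),
            )
        self.speech_head = head(n_words)   # word-recognition branch
        self.music_head = head(n_genres)   # genre-recognition branch

    def forward(self, x):
        z = self.shared(x)
        return self.speech_head(z), self.music_head(z)

# Example forward pass on a dummy cochleagram batch (2 clips, 128 bands, 200 frames).
net = BranchedAudioNet()
word_logits, genre_logits = net(torch.randn(2, 1, 128, 200))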
30 CFR 250.124 - Will MMS approve gas injection into the cap rock containing a sulphur deposit?
Code of Federal Regulations, 2010 CFR
2010-07-01
30 CFR, Mineral Resources; Outer Continental Shelf, General Performance Standards, § 250.124 Will MMS approve gas injection into the cap rock containing a sulphur deposit? To receive the Regional Supervisor's approval to inject gas into the cap rock...
Stojmenova, Kristina; Sodnik, Jaka
2018-07-04
There are 3 standardized versions of the Detection Response Task (DRT): 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed 8 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded similar differences in response times and hit rates, and both were significantly better in this respect than the visual version. There were no significant differences in performance rate between trials without DRT stimuli and trials with DRT stimuli, or among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver's attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
[Auditory training in workshops: group therapy option].
Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa
2006-01-01
THEME: auditory training in groups. AIM: to verify, in a group of individuals with mental retardation, the efficacy of auditory training in a workshop environment. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system was verified through evoked otoacoustic emissions. Participants were evaluated using a specific protocol concerning the auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination, and auditory comprehension) at the beginning and at the end of the project. Data entry, processing, and analysis were performed with the Epi Info 6.04 software. RESULTS: the groups did not differ regarding age (mean = 23.6 years) or gender (40% male). In the first evaluation, both groups presented similar performances. In the final evaluation, an improvement in auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006), and auditory discrimination (p=0.03). CONCLUSION: group auditory training demonstrated to be effective in individuals with mental retardation, with an improvement in auditory abilities being observed. More studies, with a larger number of participants, are necessary to confirm the findings of the present research. These results will help public health professionals reanalyze the therapy models used, so that they can apply specific methods according to individual needs, such as auditory training workshops.
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
Thermographic Analysis of Composite Cobonds on the X-33
NASA Technical Reports Server (NTRS)
Russell, S. S.; Walker, J. L.; Lansing, M. D.
2001-01-01
During the manufacture of the X-33 liquid hydrogen (LH2) Tank 2, a total of 36 reinforcing caps were inspected thermographically. The cured reinforcing sheets of graphite/epoxy were bonded to the tank using a wet cobond process with vacuum bagging and low temperature curing. A foam filler material wedge separated the reinforcing caps from the outer skin of the tank. Manufacturing difficulties caused by a combination of the size of the reinforcing caps and their complex geometry led to a potential for trapping air in the bond line. An inspection process was desired to ensure that the bond line was free of voids before it had cured so that measures could be taken to rub out the entrapped air or remove the cap and perform additional surface matching. Infrared thermography was used to perform the precure 'wet bond' inspection as well as to document the final 'cured' condition of the caps. The thermal map of the bond line was acquired by heating the cap with either a flash lamp or a set of high intensity quartz lamps and then viewing it during cool down. The inspections were performed through the vacuum bag and voids were characterized by localized hot spots. In order to ensure that the cap had bonded to the tank properly, a post cure 'flash heating' thermographic investigation was performed with the vacuum bag removed. Any regions that had opened up after the preliminary inspection or that were hidden during the bagging operation were marked and filled by drilling small holes in the cap and injecting resin. This process was repeated until all critical sized voids were filled.
Thermographic Analysis of Composite Cobonds on the X-33
NASA Technical Reports Server (NTRS)
Russell, Samuel S.; Walker, James L.; Lansing, Matthew D.; Whitaker, Ann F. (Technical Monitor)
2000-01-01
During the manufacture of the X-33 liquid hydrogen (LH2) Tank 2, a total of thirty-six reinforcing caps were inspected thermographically. The cured reinforcing sheets of graphite/epoxy were bonded to the tank using a wet cobond process with vacuum bagging and low temperature curing. A foam filler material wedge separated the reinforcing caps from the outer skin of the tank. Manufacturing difficulties caused by a combination of the size of the reinforcing caps and their complex geometry led to a potential for trapping air in the bond line. An inspection process was desired to ensure that the bond line was free of voids before it had cured so that measures could be taken to rub out the entrapped air or remove the cap and perform additional surface matching. Infrared thermography was used to perform the precure "wet bond" inspection as well as to document the final "cured" condition of the caps. The thermal map of the bond line was acquired by heating the cap with either a flash lamp or a set of high intensity quartz lamps and then viewing it during cool down. The inspections were performed through the vacuum bag and voids were characterized by localized hot spots. In order to ensure that the cap had bonded to the tank properly, a post cure "flash heating" thermographic investigation was performed with the vacuum bag removed. Any regions that had opened up after the preliminary inspection or that were hidden during the bagging operation were marked and filled by drilling small holes in the cap and injecting resin. This process was repeated until all critical sized voids were filled.
1999 commuter assistance program evaluation manual
DOT National Transportation Integrated Search
2001-01-01
This manual was developed to assist Florida's Commuter Assistance Programs (CAP) to measure and evaluate their performance. It provides information necessary for a CAP to create and implement its own evaluation program. It discusses performance measu...
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
Neural correlates of auditory recognition memory in the primate dorsal temporal pole
Ng, Chi-Wing; Plakke, Bethany
2013-01-01
Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324
ERIC Educational Resources Information Center
Bolen, L. M.; Kimball, D. J.; Hall, C. W.; Webster, R. E.
1997-01-01
Compares the visual and auditory processing factors of the Woodcock Johnson Tests of Cognitive Ability, Revised (WJR COG) and the visual and auditory memory factors of the Learning Efficiency Test, II (LET-II) among 120 college students. Results indicate two significant performance differences between the WJR COG and LET-II. (RJM)
Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
2015-12-09
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we thus bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention. Copyright © 2015 the authors 0270-6474/15/3516094-11$15.00/0.
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf
2012-08-01
This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.
Differential coding of conspecific vocalizations in the ventral auditory cortical stream.
Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B
2014-03-26
The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
Differential Coding of Conspecific Vocalizations in the Ventral Auditory Cortical Stream
Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.
2014-01-01
The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012
Comprehensive evaluation of a child with an auditory brainstem implant.
Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P
2008-02-01
We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to obtain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Case study. Tertiary referral center. Pediatric ABI Patient 1 was born with auditory nerve agenesis. Auditory brainstem implant surgery was performed in December, 2005, in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months. Follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Auditory brainstem implant. Performance was assessed for the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Patient 1 demonstrated detection of sound, speech pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently; closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention, and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description, and self-report questionnaires were used to assign participants to each group (concussion, no concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion had significantly worse performance on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.
Reproduction of auditory and visual standards in monochannel cochlear implant users.
Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna
2004-01-01
The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modality. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users who displayed relatively good auditory comprehension did not differ from those of normally hearing subjects for either modality. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension at the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
2017-10-01
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
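For context on the equalization step mentioned above, Stevens' power law relates perceived magnitude to physical stimulus intensity as a power function; the following is a hedged sketch of how perceptual magnitudes of two feedback modalities might be matched (the symbols, constants, and exponents are generic, not values taken from the study):

\[
\psi = k\,\varphi^{a},
\qquad
k_{v}\,\varphi_{v}^{a_{v}} = k_{a}\,\varphi_{a}^{a_{a}}
\;\Longrightarrow\;
\varphi_{a} = \left(\frac{k_{v}}{k_{a}}\,\varphi_{v}^{a_{v}}\right)^{1/a_{a}},
\]

where \(\varphi_{v}\) and \(\varphi_{a}\) are the visual and auditory stimulus magnitudes and \(a_{v}\), \(a_{a}\) are their modality-specific exponents, so the auditory magnitude is chosen to produce the same perceived magnitude \(\psi\) as the visual one.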
Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies
2015-12-01
Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.
Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine
2017-04-01
Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Liu, Wen-Long; Zhao, Xu; Tan, Jian-Hui; Wang, Juan
2014-09-01
To explore the attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder (ADHD) and to provide a basis for clinical intervention. A total of 345 children diagnosed with ADHD were selected and their subtypes identified. Attention assessment was performed with the intermediate visual and auditory continuous performance test at diagnosis, and visual and auditory attention characteristics were compared between children with different subtypes. A total of 122 normal children were recruited as the control group, and their attention characteristics were compared with those of children with ADHD. The scores of full-scale attention quotient (AQ) and full-scale response control quotient (RCQ) of children with all three subtypes of ADHD were significantly lower than those of normal children (P<0.01). The auditory RCQ score was significantly lower than the visual RCQ score in children with the ADHD hyperactive/impulsive subtype (P<0.05). The auditory AQ and speed quotient (SQ) scores were significantly higher than the visual AQ and SQ scores in all three ADHD subtypes (P<0.01), while the visual precaution quotient (PQ) score was significantly higher than the auditory PQ score (P<0.01). No significant differences in auditory or visual AQ were observed between the three subtypes of ADHD. The attention function of children with ADHD is worse than that of normal children, and the impairment of visual attention is more severe than that of auditory attention. The degree of functional impairment of visual or auditory attention shows no significant differences between the three subtypes of ADHD.
Relation between brain activation and lexical performance.
Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel
2003-07-01
Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Billiet, Cassandra R.; Bellis, Teri James
2011-01-01
Purpose: Studies using speech stimuli to elicit electrophysiologic responses have found approximately 30% of children with language-based learning problems demonstrate abnormal brainstem timing. Research is needed regarding how these responses relate to performance on behavioral tests of central auditory function. The purpose of the study was to…
The Relationship between Auditory Temporal Processing, Phonemic Awareness, and Reading Disability.
ERIC Educational Resources Information Center
Bretherton, Lesley; Holmes, V. M.
2003-01-01
Investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in 8- to 12-year-olds with a reading disability, placed in groups based on performance on Tallal's tone-order judgment task. Found that a tone-order deficit did not relate to performance on order processing of speech sounds, to…
Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.
2016-01-01
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021
Phosphogypsum capping depth affects revegetation and hydrology in Western Canada.
Jackson, Mallory E; Naeth, M Anne; Chanasyk, David S; Nichol, Connie K
2011-01-01
Phosphogypsum (PG), a byproduct of phosphate fertilizer manufacturing, is commonly stacked and capped with soil at decommissioning. Shallow (0, 8, 15, and 30 cm) and thick (46 and 91 cm) sandy loam caps on a PG stack near Fort Saskatchewan, Alberta, Canada, were studied in relation to vegetation establishment and hydrologic properties. Plant response was evaluated over two growing seasons for redtop ( L.), slender wheatgrass (Agropyron trachycaulum (Link) Malte ex H.F. Lewis), tufted hairgrass (Deschampsia cespitosa (L.) P. Beauv.), and sheep fescue (Festuca ovina L.), and for a mix of these grasses with alsike clover (Trifolium hybridum L.). Water content below the soil-PG interface was monitored with time-domain reflectometry probes, and leachate water quantity and quality at a depth of 30 cm were measured using lysimeters. Vegetation responded positively to all cap depths relative to bare PG, with few significant differences among cap depths. Slender wheatgrass performed best, and tufted hairgrass performed poorly. Soil caps <1 m required by regulation were sufficient for early revegetation. Soil water fluctuated more in shallow than in thick caps, and water content was generally between field capacity and wilting point regardless of cap depth. Water quality was not affected by cap depths ≤30 cm. Leachate volumes at 30 cm from distinct rainfall events were independent of precipitation amount and cap depth. The study period had lower precipitation than normal, yet soil caps were hospitable for plant growth in the first 2 yr of establishment. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Hughes, Michelle L; Choi, Sangsook; Glickman, Erin
2018-03-01
Modeling studies suggest that differences in neural responses between polarities might reflect underlying neural health. Specifically, large differences in electrically evoked compound action potential (eCAP) amplitudes and amplitude-growth-function (AGF) slopes between polarities might reflect poorer peripheral neural health, whereas more similar eCAP responses between polarities might reflect better neural health. The interphase gap (IPG) has also been shown to relate to neural survival in animal studies. Specifically, healthy neurons exhibit larger eCAP amplitudes, lower thresholds, and steeper AGF slopes for increasing IPGs. In ears with poorer neural survival, these changes in neural responses are generally less apparent with increasing IPG. The primary goal of this study was to examine the combined effects of stimulus polarity and IPG within and across subjects to determine whether both measures represent similar underlying mechanisms related to neural health. With the exception of one measure in one group of subjects, results showed that polarity and IPG effects were generally not correlated in a systematic or predictable way. This suggests that these two effects might represent somewhat different aspects of neural health, such as differences in site of excitation versus integrative membrane characteristics, for example. Overall, the results from this study suggest that the underlying mechanisms that contribute to polarity and IPG effects in human CI recipients might be difficult to determine from animal models that do not exhibit the same anatomy, variance in etiology, electrode placement, and duration of deafness as humans. Copyright © 2017 Elsevier B.V. All rights reserved.
Effects of speech intelligibility level on concurrent visual task performance.
Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J
1994-09-01
Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.
Mischlinger, Johannes; Pitzinger, Paul; Veletzky, Luzia; Groger, Mirjam; Zoleko-Manego, Rella; Adegnika, Ayola A; Agnandji, Selidji T; Lell, Bertrand; Kremsner, Peter G; Tannich, Egbert; Mombo-Ngoma, Ghyslain; Mordmüller, Benjamin; Ramharter, Michael
2018-05-25
Diagnosis of malaria is usually based on samples of peripheral blood. However, it is unclear whether capillary (CAP) or venous (VEN) blood samples provide better diagnostic performance. Quantitative differences in parasitemia between CAP and VEN blood and diagnostic performance characteristics were investigated. Patients were recruited between September 2015 and February 2016 in Gabon. Light microscopy and qPCR quantified parasitemia of paired CAP and VEN samples, whose preparation followed the exact same methodology. CAP and VEN performance characteristics using microscopy were evaluated against a qPCR gold standard. Microscopy revealed a median (IQR) of 495 (85-3,243) parasites/µL in CAP and 429 (52-4,074) parasites/µL in VEN samples, manifesting in a +16.6% (p=0.04) higher CAP parasitemia compared with VEN parasitemia. Concordantly, qPCR demonstrated that 0.278 fewer cycles (p=0.006) were required for signal detection in CAP samples. CAP sensitivity of microscopy relative to the gold standard was 81.5% (77.4-85.6%) versus VEN sensitivity of 73.4% (68.8-78.1%), while CAP specificity and VEN specificity were 91%. CAP sensitivity and VEN sensitivity dropped to 63.3% and 45.9%, respectively, for a sub-population of low-level parasitemias, while specificities were 92%. CAP sampling leads to higher parasitemias compared with VEN sampling and improves diagnostic sensitivity. These findings may have important implications for routine diagnostics, research, and elimination campaigns of malaria.
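As a rough illustration of the sensitivity and specificity calculations reported above, the following is a minimal Python sketch (not the study's analysis code) that scores a microscopy call against a qPCR gold standard; the paired boolean lists are made-up example data.

# Hedged sketch: sensitivity and specificity of a test against a gold standard.
def sensitivity_specificity(test_positive, gold_positive):
    pairs = list(zip(test_positive, gold_positive))
    tp = sum(t and g for t, g in pairs)              # true positives
    tn = sum((not t) and (not g) for t, g in pairs)  # true negatives
    fp = sum(t and (not g) for t, g in pairs)        # false positives
    fn = sum((not t) and g for t, g in pairs)        # false negatives
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Example with hypothetical paired results for 10 samples.
microscopy_cap = [True, True, False, True, False, True, True, False, True, False]
qpcr           = [True, True, True,  True, False, True, True, False, True, True]
print(sensitivity_specificity(microscopy_cap, qpcr))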
Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.
1997-12-01
Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often comes short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool which enables application programmers to specify at a high-level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables combining efficiently parallel storage access routines and image processing sequential operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
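The CAP tool itself uses its own high-level specification language, which is not reproduced here; purely as a hedged illustration of the pipelining idea (overlapping parallel data access with processing), the sketch below uses Python's concurrent.futures with stand-in read and process functions.

# Illustrative pipeline sketch (not CAP syntax): overlap tile reads with processing.
from concurrent.futures import ThreadPoolExecutor

def read_tile(tile_id):       # stand-in for a parallel storage access routine
    return f"pixels-of-{tile_id}"

def process_tile(pixels):     # stand-in for a sequential image-processing operation
    return pixels.upper()

def pipeline(tile_ids, io_workers=4, cpu_workers=4):
    with ThreadPoolExecutor(io_workers) as io_pool, ThreadPoolExecutor(cpu_workers) as cpu_pool:
        # Stage 1: all reads are submitted up front, so I/O proceeds in parallel.
        reads = [io_pool.submit(read_tile, t) for t in tile_ids]
        # Stage 2: each tile is processed as soon as its read completes,
        # while later reads are still in flight (pipelining).
        results = [cpu_pool.submit(process_tile, r.result()) for r in reads]
        return [f.result() for f in results]

print(pipeline(range(4)))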
COMBINING RATE-BASED AND CAP-AND-TRADE EMISSIONS POLICIES. (R828628)
Rate-based emissions policies (like tradable performance standards, TPS) fix average emissions intensity, while cap-and-trade (CAT) policies fix total emissions. This paper shows that unfettered trade between rate-based and cap-and-trade programs always raises combined emissio...
Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets
Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.
2017-01-01
The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489
Psycho acoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals in different auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity with normal middle ear functioning. Individuals with visual impairment showed better thresholds on MDT, SRDT, and SNR50 than normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment thus showed superior performance in auditory processing and speech perception on complex auditory perceptual tasks.
Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic. © Society for the Experimental Analysis of Behavior.
Detection of nitrogen dioxide by CW cavity-enhanced spectroscopy
NASA Astrophysics Data System (ADS)
Jie, Guo; Han, Ye-Xing; Yu, Zhi-Wei; Tang, Huai-Wu
2016-11-01
In this paper, an accurate and sensitive system was used to monitor ambient atmospheric NO2 concentrations. This system utilizes cavity attenuated phase shift spectroscopy (CAPS), a technology related to cavity ring-down spectroscopy (CRDS). Advantages of the CAPS system include: (1) a cheap and easily controlled light source, (2) high accuracy, and (3) a low detection limit. The performance of the CAPS system was evaluated by measuring its stability and response. The minima (0.08 ppb NO2) in the Allan plots indicate the optimum averaging time (100 s) for optimum detection performance of the CAPS system. Over a 20-day period of monitoring ambient atmospheric NO2 concentrations, a comparison of the CAPS system with an extremely accurate and precise chemiluminescence-based NOx analyzer showed that the CAPS system was able to reliably and quantitatively measure both large and small fluctuations in the ambient nitrogen dioxide concentration. The experimental results show a correlation of 0.95 between the two instruments.
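For readers unfamiliar with the Allan-plot analysis mentioned above, here is a minimal sketch (assuming numpy is available; not the instrument's software) of a non-overlapping Allan deviation computed over doubling averaging times; its minimum identifies the optimum averaging window for a concentration time series.

# Hedged sketch: non-overlapping Allan deviation versus averaging time.
import numpy as np

def allan_deviation(x, sample_rate_hz):
    """Allan deviation of a 1-D series for a range of doubling averaging times."""
    x = np.asarray(x, dtype=float)
    n = x.size
    taus, adevs = [], []
    m = 1
    while n // m >= 3:
        n_bins = n // m
        means = x[: n_bins * m].reshape(n_bins, m).mean(axis=1)  # bin averages over tau = m/fs
        avar = 0.5 * np.mean(np.diff(means) ** 2)                # Allan variance
        taus.append(m / sample_rate_hz)
        adevs.append(np.sqrt(avar))
        m *= 2
    return np.array(taus), np.array(adevs)

# Example on synthetic white-noise "NO2" data sampled at 1 Hz: for pure white noise the
# Allan deviation keeps falling with tau; drift in real data makes it turn back up,
# and the minimum marks the optimum averaging time.
rng = np.random.default_rng(0)
taus, adevs = allan_deviation(rng.normal(1.0, 0.5, 20000), sample_rate_hz=1.0)
print(list(zip(taus.round(1), adevs.round(4))))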
Yadav, Saurabh K; Agrawal, Bharati; Chandra, Pranjal; Goyal, Rajendra N
2014-05-15
A sensitive and selective electrochemical biosensor is developed for the determination of chloramphenicol (CAP), exploring its direct electron transfer processes in an in-vitro model and in pharmaceutical samples. This biosensor exploits the selective binding of CAP with an aptamer immobilized onto poly-(4-amino-3-hydroxynaphthalene sulfonic acid) (p-AHNSA)-modified edge plane pyrolytic graphite. The electrochemical reduction of CAP was observed as a well-defined peak. A quartz crystal microbalance (QCM) study is performed to confirm the interaction between the polymer film and the aptamer. Cyclic voltammetry (CV) and square wave voltammetry (SWV) were used to detect CAP. The in-vitro CAP detection is performed using a bacterial strain of Haemophilus influenzae. A significant accumulation of CAP by the drug-sensitive H. influenzae strain is observed for the first time in this study using a biosensor. Various parameters affecting CAP detection in standard solution and in the in-vitro detection are optimized. The detection of CAP is linear in the range of 0.1-2500 nM, with a detection limit and sensitivity of 0.02 nM and 0.102 µA/nM, respectively. CAP is also detected in the presence of other common antibiotics and proteins present in the real sample matrix, and negligible interference is observed. Copyright © 2013 Elsevier B.V. All rights reserved.
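To make the calibration figures above concrete, here is a hedged Python sketch (assuming numpy; all values other than the reported 0.102 µA/nM sensitivity are invented) of fitting a linear calibration curve and estimating a detection limit as three times the blank noise divided by the slope.

# Hedged sketch: linear calibration of peak current vs concentration, plus a 3*sigma/slope LOD.
import numpy as np

conc_nM = np.array([0.1, 1, 10, 100, 500, 1000, 2500])        # hypothetical standards
peak_uA = 0.102 * conc_nM + np.array([0.01, -0.02, 0.03, 0.05, -0.1, 0.2, -0.3])  # synthetic responses

slope, intercept = np.polyfit(conc_nM, peak_uA, 1)             # least-squares calibration line
blank_sd_uA = 0.0007                                           # assumed blank noise (illustrative)
lod_nM = 3 * blank_sd_uA / slope                               # common 3*sigma/slope estimate

print(f"sensitivity ~ {slope:.3f} uA/nM, LOD ~ {lod_nM:.3f} nM")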
Engineering Data Compendium. Human Perception and Performance. Volume 2
1988-01-01
Cross-reference index fragments only (no abstract recovered); the listed entries include: Auditory Detection in the Presence of Visual Stimulation; Tactual Detection and Discrimination in the Presence of Accessory Stimulation; Tactile Versus Auditory Localization of Sound; Spatial Localization in the Presence of Inter-... (New York: Wiley).
ERIC Educational Resources Information Center
Zelanti, Pierre S.; Droit-Volet, Sylvie
2012-01-01
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…
Auditory Pitch Perception in Autism Spectrum Disorder Is Associated With Nonverbal Abilities.
Chowdhury, Rakhee; Sharda, Megha; Foster, Nicholas E V; Germain, Esther; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L
2017-11-01
Atypical sensory perception and heterogeneous cognitive profiles are common features of autism spectrum disorder (ASD). However, previous findings on auditory sensory processing in ASD are mixed. Accordingly, auditory perception and its relation to cognitive abilities in ASD remain poorly understood. Here, children with ASD, and age- and intelligence quotient (IQ)-matched typically developing children, were tested on a low- and a higher level pitch processing task. Verbal and nonverbal cognitive abilities were measured using the Wechsler's Abbreviated Scale of Intelligence. There were no group differences in performance on either auditory task or IQ measure. However, there was significant variability in performance on the auditory tasks in both groups that was predicted by nonverbal, not verbal skills. These results suggest that auditory perception is related to nonverbal reasoning rather than verbal abilities in ASD and typically developing children. In addition, these findings provide evidence for preserved pitch processing in school-age children with ASD with average IQ, supporting the idea that there may be a subgroup of individuals with ASD that do not present perceptual or cognitive difficulties. Future directions involve examining whether similar perceptual-cognitive relationships might be observed in a broader sample of individuals with ASD, such as those with language impairment or lower IQ.
Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
2016-10-01
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Griffiths, Rebecca L M; El-Shanawany, Tariq; Jolles, Stephen R A; Selwood, Clive; Heaps, Adrian G; Carne, Emily M; Williams, Paul E
2017-01-01
Allergy is diagnosed from typical symptoms, and tests are performed to incriminate the suspected precipitant. Skin prick tests (SPTs) are commonly performed, inexpensive, and give immediate results. Laboratory tests (ImmunoCAP) for serum allergen-specific IgE antibodies are usually performed more selectively. The immuno-solid phase allergen chip (ISAC) enables testing for specific IgE against multiple allergen components in a multiplex assay. We retrospectively analysed clinic letters, case notes, and laboratory results of 118 patients attending the National Adult Allergy Service at the University Hospital of Wales who presented diagnostic difficulty, to evaluate which testing strategy (SPT, ImmunoCAP, or ISAC) was the most appropriate to use to confirm the diagnosis in these complex patients, evaluated in a "real-life" clinical service setting. In patients with nut allergy, the detection rates of SPTs (56%) and ISAC (65%) were lower than those of ImmunoCAP (71%). ISAC had a higher detection rate (88%) than ImmunoCAP (69%) or SPT (33%) in the diagnosis of oral allergy syndrome. ImmunoCAP test results identified all 9 patients with anaphylaxis due to wheat allergy (100%), whereas ISAC was positive in only 6 of these 9 (67%). In this difficult diagnostic group, the ImmunoCAP test should be the preferred single test for possible allergy to nuts, wheat, other specific foods, and anaphylaxis of any cause. In these conditions, SPT and ISAC tests give comparable results. The most useful single test for oral allergy syndrome is ISAC, and SPT should be the preferred test for latex allergy. © 2017 S. Karger AG, Basel.
Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng
2017-08-01
This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were assessed with the Test of Variables of Attention (TOVA). The deficit in visual attention was more severe than that in auditory attention in children with ADHD. In the auditory modality, only the deficit of attentional inconsistency was sufficient to explain most cases of ADHD; however, most of the children with ADHD showed deficits of sustained attention, response inhibition, and attentional inconsistency in the visual modality. Our results also showed that the deficit of attentional inconsistency is the most important indicator for diagnosing and intervening in ADHD when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits of auditory attention differ from those of visual attention in children with ADHD.
Challenges of recording human fetal auditory-evoked response using magnetoencephalography.
Eswaran, H; Lowery, C L; Robinson, S E; Wilson, J D; Cheyne, D; McKenzie, D
2000-01-01
Our goals were to record fetal auditory-evoked responses using the magnetoencephalography technique, to understand its problems and limitations, and to propose instrument design modifications to improve signal quality and success rate. Fetal auditory-evoked responses were recorded from four fetuses with gestational ages ranging from 33 to 40+ weeks. The signals were recorded using a gantry-based superconducting quantum interference device. The auditory stimulus was a 1 kHz tone burst. The evoked signals were digitized and averaged over an 800 ms window. After several trials of positioning and repositioning the subjects, we were able to record auditory-evoked responses in three of the four fetuses. Since the superconducting quantum interference device array was not shaped to fit over the mother's abdomen, we experienced difficulty in positioning the sensors over the fetal head. Based on this pilot study, we propose instrument design modifications that may improve the signal quality and success rate of the fetal magnetic auditory-evoked response.
Trudel, Mathieu; Côté, Mathieu; Philippon, Daniel; Simonyan, David; Villemure-Poliquin, Noémie; Bussières, Richard
2018-07-01
To compare scala vestibuli versus scala tympani cochlear implantation in terms of postoperative auditory performances and programming parameters in patients with severe scala tympani ossification. Retrospective case-control study. Tertiary referral center. One hundred three pediatric and adult patients who underwent cochlear implant surgery between 2000 and 2016. Three groups were formed: a scala vestibuli group, a scala tympani with ossification group, and a scala tympani without ossification group. Patients were matched based on their age, sex, duration of deafness, and side of implantation (ratio of 1:2:2). Postoperative evaluation of auditory performances and programming parameters following intensive functional rehabilitation program completion. Multimedia adaptive test (MAT), hearing in noise test (HINT SNR +10 dB, HINT SNR +5 dB, and HINT SNR +0 dB), impedances, neural response telemetry thresholds (NRT), neural response imaging thresholds (NRI), comfortable levels (C-levels), and threshold levels (T-levels) were compared between groups. Twenty-one patients underwent scala vestibuli cochlear implantation: 19 adults and two children. Auditory performances were similar between groups, although sentence recognition in a noisy environment was slightly higher in the scala vestibuli group. Impedance values were also higher in the scala vestibuli group, but all other programming parameters were similar between groups. We present the largest series of patients with scala vestibuli cochlear implantation. This approach provides at least comparable auditory performances without having any deleterious effects on programming parameters. This viable and useful insertion route might be the primary surgical alternative when facing partial cochlear ossification.
Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults
Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju
2018-01-01
Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition. PMID:29641462
Neural network retuning and neural predictors of learning success associated with cello training.
Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J
2018-06-26
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.
Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A
2017-10-11
The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. A procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to implant a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (dorsal and ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation, it is necessary to assess the localization, size, and extent of the lesions made in the cortex. We therefore also describe a method to easily locate the AC ablation postmortem, using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. Combining the stereotactically guided location and ablation of the AC with postmortem localization of the injured area on a coordinate map facilitates validation of the information obtained from the animal and leads to better analysis and comprehension of the data.
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.
Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A.; Cao, Jiguo; Nie, Yunlong
2017-01-01
Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning. PMID:29255435
The perception of coherent and non-coherent auditory objects: a signature in gamma frequency band.
Knief, A; Schulte, M; Bertran, O; Pantev, C
2000-07-01
Whether gamma band activity in magnetoencephalographic and electroencephalographic recordings is pertinent to the performance of a gestalt recognition process remains an open question. We investigated the functional relevance of gamma band activity for the perception of auditory objects. An auditory experiment was performed as an analog of the Kanizsa experiment in the visual modality, comprising four different coherent and non-coherent stimuli. For the first time, functional differences in evoked gamma band activity due to the perception of these stimuli were demonstrated by several methods (source localization, wavelet analysis, and independent component analysis, ICA). Responses to coherent stimuli were found to have more features in common than responses to non-coherent stimuli (e.g., more closely located sources and a smaller number of ICA components). The results point to the existence of a pitch processor in the auditory pathway.
Neural correlates of auditory short-term memory in rostral superior temporal cortex
Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo
2014-01-01
Background: Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. Results: We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed-match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing, and in their resistance to sounds intervening between the sample and match. Conclusions: Like the monkeys’ behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. PMID:25456448
Assessment of auditory skills in 140 cochlear implant children using the EARS protocol.
Sainz, Manuel; Skarzynski, Henryk; Allum, John H J; Helms, Jan; Rivas, Adriana; Martin, Jane; Zorowka, Patrick Georg; Phillips, Lucy; Delauney, Joseph; Brockmeyer, Steffi Johanna; Kompis, Martin; Korolewa, Inna; Albegger, Klaus; Zwirner, Petra; Van De Heyning, Paul; D'Haese, Patrick
2003-01-01
Auditory performance of cochlear implant (CI) children was assessed with the Listening Progress Profile (LiP) and the Monosyllabic-Trochee-Polysyllabic-Word Test (MTP) following the EARS protocol. Additionally, the 'initial drop' phenomenon, a recently reported decrease of auditory performance occurring immediately after first fitting, was investigated. Patients were 140 prelingually deafened children from various clinics and centers worldwide implanted with a MEDEL COMBI 40/40+. Analysis of LiP data showed a significant increase after 1 month of CI use compared to preoperative scores (p < 0.01). No initial decrease was observed with this test. Analysis of MTP data revealed a significant improvement of word recognition after 6 months (p < 0.01), with a significant temporary decrease after initial fitting (p < 0.01). With both tests, children's auditory skills improved up to 2 years. Amount of improvement was negatively correlated with age at implantation. Copyright 2003 S. Karger AG, Basel
Lateral pile cap load tests with gravel backfill of limited width.
DOT National Transportation Integrated Search
2010-08-01
This study investigated the increase in passive force produced by compacting a dense granular fill adjacent to a pile cap or abutment wall when the surrounding soil is in a relative loose state. Lateral load tests were performed on a pile cap with th...
DOT National Transportation Integrated Search
2001-01-01
This manual is a companion piece to the Commuter Assistance Program Evaluation Manual that was developed to assist Florida's Commuter Assistance Programs (CAP) in their efforts to measure and evaluate their performance. This manual is intended to pro...
Experimental demonstration of optical data links using a hybrid CAP/QAM modulation scheme.
Wei, J L; Ingham, J D; Cheng, Q; Cunningham, D G; Penty, R V; White, I H
2014-03-15
The first known experimental demonstrations of 10 Gb/s hybrid CAP-2/QAM-2 and 20 Gb/s hybrid CAP-4/QAM-4 transmitter/receiver-based optical data links are reported. Successful transmission over 4.3 km of standard single-mode fiber (SMF) is achieved, with a link power penalty of ∼0.4 dBo for CAP-2/QAM-2 and ∼1.5 dBo for CAP-4/QAM-4 at a bit error rate (BER) of 10⁻⁹.
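For background on the modulation format (a textbook description of CAP rather than a restatement of the authors' specific hybrid transmitter design), a carrierless amplitude/phase signal is formed by filtering two independent symbol streams with a Hilbert pair of passband pulses:

\[ s(t) = \sum_{n} \left[ a_n\, f_I(t - nT) - b_n\, f_Q(t - nT) \right], \]

where T is the symbol period, (a_n, b_n) are the in-phase and quadrature symbol levels, and f_I, f_Q share the same magnitude response but differ in phase by 90°, so the two streams can be separated at the receiver without an explicit carrier.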
ERIC Educational Resources Information Center
Karasu, H. Pelin
2017-01-01
Written expression skills play an important role in the development of the linguistic, academic and social skills of individuals from their school years onwards. The aim of this study was to evaluate the written expression performance of hearing-impaired students who receive auditory-oral education, and examine the student characteristics that…
Is More Better? — Night Vision Enhancement System’s Pedestrian Warning Modes and Older Drivers
Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas
2010-01-01
Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during day time. Poor visibility due to darkness is believed to be one of the causes for the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers’ workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues, including: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to pedestrian threat at the onset of braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with visual warning, their effectiveness decreased. This result demonstrated that, contrary to general sense regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers. PMID:21050616
Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H
2016-07-06
During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we studied the behavioral consequences of adding different types of auditory distractors in a visual selective attention task in wild-type and α-9 nicotinic receptor knock-out (KO) mice. We demonstrate that KO mice perform poorly in the selective attention paradigm and that an intact medial olivocochlear transmission aids in ignoring auditory distractors during attention. Copyright © 2016 the authors 0270-6474/16/367198-12$15.00/0.
Tsunoda, Naoko; Hashimoto, Mamoru; Ishikawa, Tomohisa; Fukuhara, Ryuji; Yuki, Seiji; Tanaka, Hibiki; Hatada, Yutaka; Miyagawa, Yusuke; Ikeda, Manabu
2018-05-08
Auditory hallucinations are an important symptom for diagnosing dementia with Lewy bodies (DLB), yet they have received less attention than visual hallucinations. We investigated the clinical features of auditory hallucinations and the possible mechanisms by which they arise in patients with DLB. We recruited 124 consecutive patients with probable DLB (diagnosis based on the DLB International Workshop 2005 criteria; study period: June 2007-January 2015) from the dementia referral center of Kumamoto University Hospital. We used the Neuropsychiatric Inventory to assess the presence of auditory hallucinations, visual hallucinations, and other neuropsychiatric symptoms. We reviewed all available clinical records of patients with auditory hallucinations to assess their clinical features. We performed multiple logistic regression analysis to identify significant independent predictors of auditory hallucinations. Of the 124 patients, 44 (35.5%) had auditory hallucinations and 75 (60.5%) had visual hallucinations. The majority of patients (90.9%) with auditory hallucinations also had visual hallucinations. Auditory hallucinations consisted mostly of human voices, and 90% of patients described them as like hearing a soundtrack of the scene. Multiple logistic regression showed that the presence of auditory hallucinations was significantly associated with female sex (P = .04) and hearing impairment (P = .004). The analysis also revealed independent correlations between the presence of auditory hallucinations and visual hallucinations (P < .001), phantom boarder delusions (P = .001), and depression (P = .038). Auditory hallucinations are common neuropsychiatric symptoms in DLB and usually appear as a background soundtrack accompanying visual hallucinations. Auditory hallucinations in patients with DLB are more likely to occur in women and those with impaired hearing, depression, delusions, or visual hallucinations. © Copyright 2018 Physicians Postgraduate Press, Inc.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lalani, Sanam J; Duffield, Tyler C; Trontel, Haley G; Bigler, Erin D; Abildskov, Tracy J; Froehlich, Alyson; Prigge, Molly B D; Travers, Brittany G; Anderson, Jeffrey S; Zielinski, Brandon A; Alexander, Andrew; Lange, Nicholas; Lainhart, Janet E
2018-06-01
Studies have shown that individuals with autism spectrum disorder (ASD) tend to perform significantly below typically developing individuals on standardized measures of attention, even when controlling for IQ. The current study sought to examine within ASD whether anatomical correlates of attention performance differed between those with average to above-average IQ (AIQ group) and those with low-average to borderline ability (LIQ group) as well as in comparison to typically developing controls (TDC). Using automated volumetric analyses, we examined regional volume of classic attention areas including the superior frontal gyrus, anterior cingulate cortex, and precuneus in ASD AIQ (n = 38) and LIQ (n = 18) individuals along with 30 TDC. Auditory attention performance was assessed using subtests of the Test of Memory and Learning (TOMAL) compared among the groups and then correlated with regional brain volumes. Analyses revealed group differences in attention. The three groups did not differ significantly on any auditory attention-related brain volumes; however, trends toward significant size-attention function interactions were observed. Negative correlations were found between the volume of the precuneus and auditory attention performance for the AIQ ASD group, indicating larger volume related to poorer performance. Implications for general attention functioning and dysfunctional neural connectivity in ASD are discussed.
Most, Tova; Michaelis, Hilit
2012-08-01
This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.
Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2013-01-01
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
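As a rough sketch of the single-trial classification step described above (a hypothetical pipeline using scikit-learn; the feature window, channel count, and variable names are illustrative assumptions, not the authors' exact analysis):

```python
# Illustrative sketch: classifying attended vs. unattended virtual-sound-direction
# epochs with a linear SVM, loosely mirroring the offline analysis described above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 32, 180))   # placeholder EEG: 120 trials, 32 channels, 0-600 ms at 300 Hz
labels = rng.integers(0, 2, 120)               # 1 = attended direction, 0 = other

# Average each channel over the 200-500 ms window (P300 range) to form features.
t = np.linspace(0.0, 0.6, epochs.shape[2])
features = epochs[:, :, (t >= 0.2) & (t <= 0.5)].mean(axis=2)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, features, labels, cv=5)
print(f"single-trial cross-validated accuracy: {scores.mean():.2f}")
```

With real epochs in place of the random placeholders, averaging several trials per class before classification would correspond to the trial-averaged analysis reported in the abstract.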
Association of blood antioxidants status with visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham
2015-01-01
A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, it is expected that a better blood antioxidant status might be related to sustained attention. However, the relationship between the blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruit and vegetable intake and the blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated based on the Integrated Visual and Auditory Continuous Performance Test using the Integrated Visual and Auditory (IVA) software. The 24-hour food recall questionnaire was used for estimating fruit and vegetable intake. Serum total antioxidant capacity (TAC) and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education, and physical activity, higher reported fruit and vegetable intake was associated with better visual and auditory sustained attention (P < 0.001). A high intake of some subgroups of fruits and vegetables (i.e., berries, cruciferous vegetables, green leafy vegetables, and other vegetables) was also associated with better sustained attention (P < 0.02). Serum TAC and erythrocyte SOD and GPx activities increased across the tertiles of visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake (P < 0.05). Improved visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through appropriate dietary intake may enhance sustained attention.
Schultz, Benjamin G; van Vugt, Floris T
2016-12-01
Timing abilities are often measured by having participants tap their finger along with a metronome and presenting tap-triggered auditory feedback. These experiments predominantly use electronic percussion pads combined with software (e.g., FTAP or Max/MSP) that records responses and delivers auditory feedback. However, these setups involve unknown latencies between tap onset and auditory feedback and can sometimes miss responses or record multiple, superfluous responses for a single tap. These issues may distort measurements of tapping performance or affect the performance of the individual. We present an alternative setup using an Arduino microcontroller that addresses these issues and delivers low-latency auditory feedback. We validated our setup by having participants (N = 6) tap on a force-sensitive resistor pad connected to the Arduino and on an electronic percussion pad with various levels of force and tempi. The Arduino delivered auditory feedback through a pulse-width modulation (PWM) pin connected to a headphone jack or a wave shield component. The Arduino's PWM (M = 0.6 ms, SD = 0.3) and wave shield (M = 2.6 ms, SD = 0.3) demonstrated significantly lower auditory feedback latencies than the percussion pad (M = 9.1 ms, SD = 2.0), FTAP (M = 14.6 ms, SD = 2.8), and Max/MSP (M = 15.8 ms, SD = 3.4). The PWM and wave shield latencies were also significantly less variable than those from FTAP and Max/MSP. The Arduino missed significantly fewer taps, and recorded fewer superfluous responses, than the percussion pad. The Arduino captured all responses, whereas at lower tapping forces, the percussion pad missed more taps. Regardless of tapping force, the Arduino outperformed the percussion pad. Overall, the Arduino is a high-precision, low-latency, portable, and affordable tool for auditory experiments.
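As an illustration of how such latencies might be summarized offline (a hypothetical analysis sketch, not code from the study; the timestamp values and the 50 ms pairing window are made-up placeholders):

```python
# Illustrative sketch: summarising auditory-feedback latency from logged tap onsets
# and feedback onsets (seconds), and counting taps that received no feedback.
import numpy as np

tap_onsets = np.array([0.512, 1.031, 1.498, 2.004, 2.517])          # placeholder values
feedback_onsets = np.array([0.5126, 1.0316, 1.4987, 2.0046])        # one tap missed

latencies, missed = [], 0
for t in tap_onsets:
    later = feedback_onsets[feedback_onsets >= t]
    if later.size and (later[0] - t) < 0.05:     # pair a tap with feedback within 50 ms
        latencies.append(later[0] - t)
    else:
        missed += 1

latencies = np.array(latencies) * 1000           # convert to ms
print(f"latency M = {latencies.mean():.1f} ms, SD = {latencies.std(ddof=1):.1f} ms, missed = {missed}")
```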
Thirumala, Parthasarathy D; Krishnaiah, Balaji; Crammond, Donald J; Habeych, Miguel E; Balzer, Jeffrey R
2014-04-01
Intraoperative monitoring of brain stem auditory evoked potential during microvascular decompression (MVD) prevent hearing loss (HL). Previous studies have shown that changes in wave III (wIII) are an early and sensitive sign of auditory nerve injury. To evaluate the changes of amplitude and latency of wIII of brain stem auditory evoked potential during MVD and its association with postoperative HL. Hearing loss was classified by American Academy of Otolaryngology - Head and Neck Surgery (AAO-HNS) criteria, based on changes in pure tone audiometry and speech discrimination score. Retrospective analysis of wIII in patients who underwent intraoperative monitoring with brain stem auditory evoked potential during MVD was performed. A univariate logistic regression analysis was performed on independent variables amplitude of wIII and latency of wIII at change max and On-Skin, or a final recording at the time of skin closure. A further analysis for the same variables was performed adjusting for the loss of wave. The latency of wIII was not found to be significantly different between groups I and II. The amplitude of wIII was significantly decreased in the group with HL. Regression analysis did not find any increased odds of HL with changes in the amplitude of wIII. Changes in wave III did not increase the odds of HL in patients who underwent brain stem auditory evoked potential s during MVD. This information might be valuable to evaluate the value of wIII as an alarm criterion during MVD to prevent HL.
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneously visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over VC and AU approaches. Questionnaires' results indicate that the HVA approach was the less demanding gaze-independent interface. Interestingly, the P300 grand average for HVA approach coincides with an almost perfect sum of P300 evoked separately by VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with state-of-the-art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Compatibility of motion facilitates visuomotor synchronization.
Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L
2010-12-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
Steel Foil Improves Performance Of Blasting Caps
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Perry, Ronnie; Schimmel, Morry L.
1990-01-01
Blasting caps, which commonly include deep-drawn aluminum cups, give significantly higher initiation performance when steel foils are applied to their output faces. Steel closures 0.005 in. (0.13 mm) thick are more effective than aluminum. Caps with directly bonded steel foil produce fragment velocities of 9,300 ft/s (2.8 km/s), with large craters and unpredictable patterns, to such a degree that no attempts were made to initiate explosions. The approach is useful in military and aerospace applications and in such specialized industries as mining and oil exploration.
Wu, Meng; Liu, Jia; Li, Weitao; Liu, Ming; Jiang, Chunyu; Li, Zhongpei
2017-10-01
Chlorantraniliprole (CAP) is a newly developed insecticide widely used in rice fields in China. There have been few studies evaluating the toxicological effects of CAP on soil-associated microbes. An 85-day microcosm experiment was performed to reveal the dissipation dynamics of CAP in three types of paddy soils in subtropical China. The effects of CAP on microbial activities (microbial biomass carbon, MBC; basal soil respiration, BSR; microbial metabolic quotient, qCO2; acid phosphatase and sucrose invertase activities) in the soils were periodically evaluated. Microbial phospholipid fatty acid (PLFA) analysis was used to evaluate changes in soil microbial community composition on days 14 and 50 of the experiment. CAP residues were extracted using the quick, easy, cheap, effective, rugged, and safe (QuEChERS) method, and quantification was performed by high performance liquid chromatography (HPLC). The half-lives (DT50) of CAP were in the range of 41.0-53.0 days in the three soils. The results showed that CAP did not impart negative effects on MBC during the incubation. CAP inhibited BSR, qCO2, and acid phosphatase and sucrose invertase activities in the first 14 days of incubation in all the soils. After day 14, the soil microbial parameters of CAP-treated soils became statistically comparable to those of their controls. Principal component analysis (PCA) of the abundance of biomarker PLFAs indicated that the application of CAP significantly changed the composition of microbial communities in all three paddy soils on day 14, but the soil microbial community composition recovered by day 50. This study indicates that CAP does not ultimately impair the microbial activities and microbial composition of these three paddy soil types. Copyright © 2017 Elsevier Inc. All rights reserved.
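For reference, half-lives of this kind are usually derived from a first-order dissipation fit (the standard model; whether the authors used exactly this kinetic form is an assumption here):

\[ C(t) = C_0\, e^{-kt}, \qquad \mathrm{DT}_{50} = \frac{\ln 2}{k}, \]

so the reported DT50 range of 41.0-53.0 days corresponds to rate constants of roughly k ≈ 0.013-0.017 day⁻¹.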
Auditory conflict and congruence in frontotemporal dementia.
Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D
2017-09-01
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
[Genetic, epidemiologic and clinical study of familial prostate cancer].
Valéri, Antoine
2002-01-01
Prostate cancer (CaP) is the most frequent cancer among men over 50, and its frequency increases with age. It has become a significant public health problem due to the ageing population. Epidemiologists report familial aggregation in 15 to 25% of cases and inherited susceptibility, with an autosomal dominant or X-linked model, in 5 to 10% of cases. The clinical and biological features of familial CaP remain controversial. Our objectives were to perform: (1) a genetic study of familial CaP (mapping of susceptibility genes), (2) an epidemiologic study (prevalence, associated cancers in the genealogy, model of transmission), and (3) a clinical study of familial CaP. (I) By conducting a nationwide collection of families with two or more CaP cases (ProGène study), we performed a genome-wide linkage analysis and identified a predisposing locus on 1q42.2-43 named PCaP (Predisposing to Cancer of the Prostate). (II) By conducting a systematic genealogic analysis of 691 CaP patients followed up in 3 university departments of urology (hospitals of Brest, Paris St Louis, and Nancy), we observed: (1) 14.2% familial and 3.6% hereditary CaP; (2) a higher risk of breast cancer in first-degree relatives of probands (CaP+) in familial CaP than in sporadic CaP, and in early-onset CaP (<55 years) compared with late-onset CaP (≥75 years); (3) an autosomal dominant model with brother-brother dependence; and (4) the lack of specific clinical or biological features (except for early onset) in hereditary CaP compared with sporadic CaP. (1) The mapping of a susceptibility locus will permit the cloning of a predisposing gene on 1q42.2-43, offer the possibility of genetic screening in families at risk, and permit genotype/phenotype correlation studies; (2) the transmission model will improve parametric linkage studies; (3) the lack of distinct clinical patterns suggests diagnostic and follow-up modalities for familial and hereditary CaP similar to those for sporadic cancer, while encouraging early screening of families at risk, given the earlier onset (5 to 10 years earlier) observed.
JTF CapMed Initial Outfitting and Transition (IO&T) - History, Process, Benefits
2011-01-26
[Slide header and report-form residue only; recoverable details: "JTF CapMed Initial Outfitting and Transition (IO&T)", presented by CAPT Russell Pendergrass, JTF CapMed, at the 2011 Military Health System Conference ("The Quadruple Aim: Working Together, Achieving Success"), 26 January 2011.]
Sopena, N; Sabrià-Leal, M; Pedro-Botet, M L; Padilla, E; Dominguez, J; Morera, J; Tudela, P
1998-05-01
The aim of this study was to compare the clinical, biological, and radiologic features at presentation in the emergency ward of community-acquired pneumonia (CAP) caused by Legionella pneumophila (LP) and other community-acquired bacterial pneumonias, to help in the early diagnosis of CAP by LP. Three hundred ninety-two patients with CAP were studied prospectively in the emergency department of a 600-bed university hospital. Univariate and multivariate analyses were performed to compare epidemiologic and demographic data and clinical, analytical, and radiologic features of presentation in 48 patients with CAP by LP and 125 patients with CAP of other bacterial etiology (68 by Streptococcus pneumoniae, 41 by Chlamydia pneumoniae, 5 by Mycoplasma pneumoniae, 4 by Coxiella burnetii, 3 by Pseudomonas aeruginosa, 2 by Haemophilus influenzae, and 2 by Nocardia species). Univariate analysis showed that CAP by LP was more frequent in middle-aged, healthy (but alcohol-drinking) male patients than CAP of other etiology. Moreover, lack of response to previous beta-lactam drugs, headache, diarrhea, severe hyponatremia, and elevation of serum creatine kinase (CK) levels on presentation were more frequent in CAP by LP, while cough, expectoration, and thoracic pain were more frequent in CAP of other bacterial etiology. However, multivariate analysis only confirmed these differences with respect to lack of underlying disease, diarrhea, and elevation of the CK level. We conclude that detailed analysis of the features of presentation of CAP allows suspicion of Legionnaires' disease in the emergency department. The initiation of antibiotic treatment, including a macrolide, and the performance of rapid diagnostic techniques are mandatory in these cases.
Engel, M F; van Velzen, M; Hoepelman, A I M; Thijsen, S; Oosterheert, J J
2013-04-01
A positive pneumococcal urinary antigen test (PUAT) for Streptococcus pneumoniae allows an early switch from empiric to targeted treatment in hospitalised community-acquired pneumonia (CAP) patients. The economic and treatment consequences of this widely implemented test are, however, unknown. We retrospectively evaluated all tests performed since its introduction in two teaching hospitals. Data on patient characteristics, treatment, admission and outcome were retrieved from the electronic patient files. Test benefits were expressed as the number of days that targeted therapy (i.e. penicillin) was administered to hospitalised CAP patients due to a positive PUAT. This calculation was based on the timing of the PUAT and the initiation of targeted therapy. Subsequently, we performed two direct cost analyses from a hospital perspective, first including tests performed for CAP only, and second including costs of all (excessive) tests. Between 2005 and 2012, 3,479 PUATs were performed, of which 1,907 (55 %) were for CAP. A total of 1,638 PUATs (86 %) were negative and 269 (14 %) were positive. Fifty-two (19 %) positive tests were excluded. In 75 (35 %) of the 217 remaining positive tests, a positive PUAT led to targeted treatment during 293 cumulative admission days. Testing costs for CAP only were €131 per targeted treatment day. These costs were €257 if local protocol dictated PUAT use for all CAP cases, as opposed to €72 if the test was reserved for severe cases only. When including all tests, PUAT costs were €254 per targeted treatment day. Therefore, improving the selective use of the PUAT in hospitalised CAP patients may lead to increased (cost-)efficiency.
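The reported cost figures reduce to a simple ratio: total spending on tests divided by the cumulative number of targeted-treatment days those tests enabled. The sketch below shows that arithmetic; the per-test price is a hypothetical parameter (the abstract reports only the resulting ratios), so the outputs merely approximate the €131 and €254 figures.

```python
# Minimal sketch of the cost-per-targeted-treatment-day calculation described above.
# COST_PER_TEST_EUR is a hypothetical unit price; the abstract reports only the ratios.
COST_PER_TEST_EUR = 20.0

def cost_per_targeted_day(n_tests: int, targeted_days: int,
                          cost_per_test: float = COST_PER_TEST_EUR) -> float:
    """Euros spent on testing per day of targeted (penicillin) therapy enabled."""
    return n_tests * cost_per_test / targeted_days

# CAP-only analysis: 1,907 tests, 293 cumulative targeted-treatment days (reported: ~EUR 131/day).
print(f"CAP only : EUR {cost_per_targeted_day(1907, 293):.0f} per targeted treatment day")
# All tests, including non-CAP indications: 3,479 tests (reported: ~EUR 254/day).
print(f"All tests: EUR {cost_per_targeted_day(3479, 293):.0f} per targeted treatment day")
```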
Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei
2012-06-09
Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to impose a high conscious burden on the users. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the use of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Ten male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day, the experiment objective, tasks, and experimental setting were explained, and the subjects then completed 30 minutes of guided training. On the second day, each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, consistent with a visual "pop-out" or enhancement effect. Also, the NASA TLX, the EEG alpha and beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications.
Brüggemann, Petra; Szczepek, Agnieszka J.; Klee, Katharina; Gräbel, Stefan; Mazurek, Birgit; Olze, Heidi
2017-01-01
Cochlear implantation (CI) is increasingly being used in the auditory rehabilitation of deaf patients. Here, we investigated whether the auditory rehabilitation can be influenced by the psychological burden caused by mental conditions. Our sample included 47 patients who underwent implantation. All patients were monitored before and 6 months after CI. Auditory performance was assessed using the Oldenburg Inventory (OI) and Freiburg monosyllable (FB MS) speech discrimination test. The health-related quality of life was measured with the Nijmegen Cochlear Implantation Questionnaire (NCIQ) whereas tinnitus-related distress was measured with the German version of the Tinnitus Questionnaire (TQ). We additionally assessed the general perceived quality of life, the perceived stress, coping abilities, anxiety levels and the depressive symptoms. Finally, a structured interview to detect mental conditions (CIDI) was performed before and after surgery. We found that CI led to an overall improvement in auditory performance as well as in anxiety and depression, quality of life, tinnitus distress and coping strategies. CIDI revealed that 81% of patients in our sample had affective, anxiety, and/or somatoform disorders before or after CI. The affective disorders included dysthymia and depression, while anxiety disorders included agoraphobias and unspecified phobias. We also diagnosed cases of somatoform pain disorders and unrecognizable figure somatoform disorders. We found a positive correlation between the auditory performance and the decrease of anxiety and depression, tinnitus-related distress and perceived stress. There was no association between the presence of a mental condition itself and the outcome of auditory rehabilitation. We conclude that CI candidates exhibit high rates of psychological disorders, and there is a particularly strong association between somatoform disorders and tinnitus. The presence of mental disorders remained unaffected by CI but the degree of psychological burden decreased significantly post-CI. The implants benefitted patients in a number of psychosocial areas, improving the symptoms of depression and anxiety, tinnitus, and their quality of life and coping strategies. The prevalence of mental disorders in patients who are candidates for CI suggests the need for a comprehensive psychological and psychosomatic management of their treatment. PMID:28529479
Short-term memory stores organized by information domain.
Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C
2016-04-01
Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.
Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
2015-11-04
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors 0270-6474/15/3514910-13$15.00/0.
Retention Strength of Conical Welding Caps for Fixed Implant-Supported Prostheses.
Nardi, Diego; Degidi, Marco; Sighinolfi, Gianluca; Tebbel, Florian; Marchetti, Claudio
This study evaluated the retention strength of welding caps for Ankylos standard abutments using a pull-out test. Each sample consisted of an implant abutment and its welding cap. The tests were performed with a Zwick Roell testing machine with a 1-kN load cell. The retention strength of the welding caps increased with higher abutment diameters and higher head heights and was comparable or superior to the values reported in the literature for the temporary cements used in implant dentistry. Welding caps provide a reliable connection between an abutment and a fixed prosthesis without the use of cement.
Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2.
Mishra, Rajkishor; Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction "Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action" (American Diabetes Association). Previous literature has reported connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution ability through GDT (Gap Detection Threshold) in individuals with diabetes mellitus type 2 with high frequency hearing loss. Methods Fifteen subjects with diabetes mellitus type 2 with high frequency hearing loss in the age range of 30 to 40 years participated in the study as the experimental group. Fifteen age-matched non-diabetic individuals with normal hearing served as the control group. We administered the Gap Detection Threshold (GDT) test to all participants to assess their temporal resolution ability. Result We used the independent t -test to compare between groups. Results showed that the diabetic group (experimental) performed significantly poorer compared with the non-diabetic group (control). Conclusion It is possible to conclude that widening of auditory filters and changes in the central auditory nervous system contributed to poorer performance for temporal resolution task (Gap Detection Threshold) in individuals with diabetes mellitus type 2. Findings of the present study revealed the deteriorating effect of diabetes mellitus type 2 at the central auditory processing level.
Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
2016-01-01
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
The modality effect of ego depletion: Auditory task modality reduces ego depletion.
Li, Qiong; Wang, Zhenhong
2016-08-01
An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect was also robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, one group's participants completed a visual attention regulation task and the other group's participants completed an auditory attention regulation task, and then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, which indicated that there was high ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
A sound advantage: Increased auditory capacity in autism.
Remington, Anna; Fairnie, Jake
2017-09-01
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations
Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.
2009-01-01
Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Measuring atmospheric visibility using cavity attenuated phase shift spectroscopy
NASA Astrophysics Data System (ADS)
Jie, Guo; Ye, Shan-Shan; Yang, Xiao; Han, Ye-Xing; Tang, Huai-Wu; Yu, Zhi-Wei
2016-10-01
In this paper, an accurate and sensitive cavity attenuated phase shift spectroscopy (CAPS) system was used to monitor the atmospheric visibility coefficient in urban areas. The CAPS system, which measures atmospheric visibility within a 10 nm bandpass centered at 532 nm, comprises a green LED with a center wavelength of 532 nm, a resonant optical cavity (36 cm length), a photomultiplier tube detector, and a lock-in amplifier. The performance of the CAPS system was evaluated by measuring the stability and response of the system. The minima (0.06 Mm-1) in the Allan plots give the optimum averaging time (80 s) for the best detection performance of the CAPS system. At a flow rate of 2 L/min, the rise and fall response time of the CAPS system is about 15 s, enabling fast measurement of visibility. Comparison with the measurement results of a forward-scatter visibility meter verified that the CAPS system measurements are reliable and of high precision. These figures indicate that this method has the potential to become one of the most sensitive on-line analytical techniques for atmospheric visibility detection.
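The optimum averaging time quoted above corresponds to the minimum of the Allan deviation of the instrument's extinction record. Below is a minimal sketch of that computation on a synthetic white-noise-plus-drift signal; the signal, sampling rate, and averaging times are illustrative assumptions, not CAPS data.

```python
# Minimal sketch: non-overlapping Allan deviation versus averaging time,
# the quantity whose minimum gives an instrument's optimum averaging time.
# The synthetic signal below (white noise plus slow drift) is purely illustrative.
import numpy as np

def allan_deviation(x: np.ndarray, dt: float, taus: np.ndarray) -> np.ndarray:
    """Non-overlapping Allan deviation of samples x taken every dt seconds."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))            # samples per averaging bin
        n_bins = len(x) // m
        if n_bins < 2:
            out.append(np.nan)
            continue
        means = x[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

rng = np.random.default_rng(0)
dt = 1.0                                     # 1 s sampling, for illustration
t = np.arange(6000) * dt
signal = rng.normal(0.0, 1.0, t.size) + 0.001 * t   # white noise + slow drift
taus = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500])
adev = allan_deviation(signal, dt, taus)
print("optimum averaging time ~", taus[np.nanargmin(adev)], "s")
```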
ImmunoCAP assays: Pros and cons in allergology.
van Hage, Marianne; Hamsten, Carl; Valenta, Rudolf
2017-10-01
Allergen-specific IgE measurements and the clinical history are the cornerstones of allergy diagnosis. During the past decades, both characterization and standardization of allergen extracts and assay technology have improved. Here we discuss the uses, advantages, misinterpretations, and limitations of ImmunoCAP IgE assays (Thermo Fisher Scientific/Phadia, Uppsala, Sweden) in the field of allergology. They can be performed as singleplex (ImmunoCAP) and, for the last decade, as multiplex (Immuno Solid-phase Allergen Chip [ISAC]). The major benefit of ImmunoCAP is the quantified allergen-specific IgE antibody level it provides and the lack of interference from allergen-specific IgG antibodies. However, ImmunoCAP allergen extract assays are limited by the composition of the extract. The introduction of allergen molecules has had a major effect on analytic specificity and allergy diagnosis. They are used in both singleplex ImmunoCAP and multiplex ImmunoCAP ISAC assays. The major advantage of ISAC is the comprehensive IgE pattern obtained with a minute amount of serum. The shortcomings are its semiquantitative measurements, lower linear range, and cost per assay. With respect to assay performance, ImmunoCAP allergen extracts are good screening tools, but allergen molecules dissect the IgE response on a molecular level and put allergy research on the map of precision medicine. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Gieseler, Anja; Tahden, Maike A. S.; Thiel, Christiane M.; Wagener, Kirsten C.; Meis, Markus; Colonius, Hans
2017-01-01
Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with measures Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA only emerged significant in the group of hearing aid NU. PMID:28270784
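The 62% of explained variance reported above comes from a stepwise linear regression over auditory, cognitive, and self-report predictors. Below is a minimal sketch of one such forward-selection procedure on synthetic data; the predictor names echo those in the abstract, but the data, effect sizes, and entry criterion are assumptions rather than the study's analysis.

```python
# Illustrative forward stepwise linear regression (greedy R^2-based selection),
# in the spirit of the analysis described above. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 438
predictors = {
    "PTA": rng.normal(40, 15, n),        # pure-tone average in dB HL (synthetic)
    "Age": rng.normal(71, 6, n),
    "VerbalIQ": rng.normal(100, 15, n),
    "SES": rng.normal(0, 1, n),          # socio-economic status score (synthetic)
}
# Synthetic outcome: speech-in-noise threshold driven mainly by PTA, age, and verbal IQ.
y = (0.25 * predictors["PTA"] + 0.10 * predictors["Age"]
     - 0.05 * predictors["VerbalIQ"] + rng.normal(0, 2, n))

def r2_with(features):
    """R^2 of an ordinary least-squares fit using the given predictor subset."""
    X = np.column_stack([predictors[f] for f in features])
    return LinearRegression().fit(X, y).score(X, y)

selected, remaining, best_r2 = [], list(predictors), 0.0
while remaining:
    gains = {p: r2_with(selected + [p]) for p in remaining}
    p_best = max(gains, key=gains.get)
    if gains[p_best] - best_r2 < 0.01:   # assumed entry criterion: at least +1% explained variance
        break
    selected.append(p_best)
    remaining.remove(p_best)
    best_r2 = gains[p_best]

print("selected predictors:", selected, f"(R^2 = {best_r2:.2f})")
```

In practice, clinical analyses of this kind usually combine such selection with significance-based entry/removal criteria or cross-validation rather than a raw R^2 gain; the threshold here is only a stand-in.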
Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C
2016-06-01
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Prediction of cognitive outcome based on the progression of auditory discrimination during coma.
Juan, Elsa; De Lucia, Marzia; Tzovara, Athina; Beaud, Valérie; Oddo, Mauro; Clarke, Stephanie; Rossetti, Andrea O
2016-09-01
To date, no clinical test is able to predict cognitive and functional outcome of cardiac arrest survivors. Improvement of auditory discrimination in acute coma indicates survival with high specificity. Whether the degree of this improvement is indicative of recovery remains unknown. Here we investigated if progression of auditory discrimination can predict cognitive and functional outcome. We prospectively recorded electroencephalography responses to auditory stimuli of post-anoxic comatose patients on the first and second day after admission. For each recording, auditory discrimination was quantified and its evolution over the two recordings was used to classify survivors as "predicted" when it increased vs. "other" if not. Cognitive functions were tested on awakening and functional outcome was assessed at 3 months using the Cerebral Performance Categories (CPC) scale. Thirty-two patients were included, 14 "predicted survivors" and 18 "other survivors". "Predicted survivors" were more likely to recover basic cognitive functions shortly after awakening (ability to follow a standardized neuropsychological battery: 86% vs. 44%; p=0.03 (Fisher)) and to show a very good functional outcome at 3 months (CPC 1: 86% vs. 33%; p=0.004 (Fisher)). Moreover, progression of auditory discrimination during coma was strongly correlated with cognitive performance on awakening (phonemic verbal fluency: rs=0.48; p=0.009 (Spearman)). Progression of auditory discrimination during coma provides early indication of future recovery of cognitive functions. The degree of improvement is informative of the degree of functional impairment. If confirmed in a larger cohort, this test would be the first to predict detailed outcome at the single-patient level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
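The group comparisons quoted above (86% vs. 44%, 86% vs. 33%) are Fisher exact tests on 2x2 contingency tables. The snippet below reconstructs the first comparison from the reported group sizes and percentages; the counts are back-calculated approximations, not the study's raw data.

```python
# Fisher exact test on a 2x2 table, as used for the group comparisons above.
# Counts are reconstructed from the reported group sizes and percentages (approximate).
from scipy.stats import fisher_exact

#                  able to follow battery | not able
predicted_group = [12, 2]    # ~86% of the 14 "predicted survivors"
other_group     = [8, 10]    # ~44% of the 18 "other survivors"

odds_ratio, p_value = fisher_exact([predicted_group, other_group], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3f}")   # lands near the reported p = 0.03
```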
Bood, Robert Jan; Nijssen, Marijn; van der Kamp, John; Roerdink, Melvyn
2013-01-01
Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in 1) a control condition without acoustic stimuli, 2) a metronome condition with a sequence of beeps matching participants’ cadence (synchronization), and 3) a music condition with synchronous motivational music matched to participants’ cadence (synchronization+motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli –which was most salient during the metronome condition– helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner’s cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by auditory-motor synchronization helps to optimize running economy. PMID:23951000
Zhao, Zhenling; Liu, Yongchun; Ma, Lanlan; Sato, Yu; Qin, Ling
2015-01-01
Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After being trained to perform a Go/No-Go task cued by sounds, we found that cats could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure-tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal’s behavioral decision process and has implications for the development of cortical auditory prosthetics. PMID:25964744
Hanley, J Richard; Dell, Gary S; Kay, Janice; Baron, Rachel
2004-03-01
In this paper, we attempt to simulate the picture naming and auditory repetition performance of two patients reported by Hanley, Kay, and Edwards (2002), who were matched for picture naming score but who differed significantly in their ability to repeat familiar words. In Experiment 1, we demonstrate that the model of naming and repetition put forward by Foygel and Dell (2000) is better able to accommodate this pattern of performance than the model put forward by Dell, Schwartz, Martin, Saffran, and Gagnon (1997). Nevertheless, Foygel and Dell's model underpredicted the repetition performance of both patients. In Experiment 2, we attempt to simulate their performance using a new dual route model of repetition in which Foygel and Dell's model is augmented by an additional nonlexical repetition pathway. The new model provided a more accurate fit to the real-word repetition performance of both patients. It is argued that the results provide support for dual route models of auditory repetition.
Abikoff, H; Courtney, M E; Szeibel, P J; Koplewicz, H S
1996-05-01
This study evaluated the impact of extra-task stimulation on the academic task performance of children with attention-deficit/hyperactivity disorder (ADHD). Twenty boys with ADHD and 20 nondisabled boys worked on an arithmetic task during high stimulation (music), low stimulation (speech), and no stimulation (silence). The music "distractors" were individualized for each child, and the arithmetic problems were at each child's ability level. A significant Group x Condition interaction was found for number of correct answers. Specifically, the nondisabled youngsters performed similarly under all three auditory conditions. In contrast, the children with ADHD did significantly better under the music condition than speech or silence conditions. However, a significant Group x Order interaction indicated that arithmetic performance was enhanced only for those children with ADHD who received music as the first condition. The facilitative effects of salient auditory stimulation on the arithmetic performance of the children with ADHD provide some support for the underarousal/optimal stimulation theory of ADHD.
Cortical systems associated with covert music rehearsal.
Langheim, Frederick J P; Callicott, Joseph H; Mattay, Venkata S; Duyn, Jeff H; Weinberger, Daniel R
2002-08-01
Musical representation and overt music production are necessarily complex cognitive phenomena. While overt musical performance may be observed and studied, the act of performance itself necessarily skews results toward the importance of primary sensorimotor and auditory cortices. However, imagined musical performance (IMP) represents a complex behavioral task involving components suited to exploring the physiological underpinnings of musical cognition in music performance without the sensorimotor and auditory confounds of overt performance. We mapped the blood oxygenation level-dependent fMRI activation response associated with IMP in experienced musicians independent of the piece imagined. IMP consistently activated supplementary motor and premotor areas, right superior parietal lobule, right inferior frontal gyrus, bilateral mid-frontal gyri, and bilateral lateral cerebellum in contrast with rest, in a manner distinct from fingertapping versus rest and passive listening to the same piece versus rest. These data implicate an associative network independent of primary sensorimotor and auditory activity, likely representing the cortical elements most intimately linked to music production.
Effects of visual working memory on brain information processing of irrelevant auditory stimuli.
Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye
2014-01-01
Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
Influence of signal processing strategy in auditory abilities.
Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari
2013-01-01
The signal processing strategy is a parameter that may influence the auditory performance of cochlear implant users, and it is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. To evaluate individuals' auditory performance using two different signal processing strategies. Prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 at three different time points. During test sessions, subjects' performance was evaluated by warble-tone sound-field thresholds and speech perception testing, in quiet and in noise. In quiet, children S1, S4, S5, and S7 showed better performance with the HiRes 120 strategy, and children S2, S9, and S11 showed better performance with the HiRes strategy. In noise, it was also observed that some children performed better using the HiRes 120 strategy and others with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to focus on optimizing cochlear implant clinical programming.
Wong, Vincent Wai-Sun; Petta, Salvatore; Hiriart, Jean-Baptiste; Cammà, Calogero; Wong, Grace Lai-Hung; Marra, Fabio; Vergniol, Julien; Chan, Anthony Wing-Hung; Tuttolomondo, Antonino; Merrouche, Wassil; Chan, Henry Lik-Yuen; Le Bail, Brigitte; Arena, Umberto; Craxì, Antonio; de Lédinghen, Victor
2017-09-01
Controlled attenuation parameter (CAP) can be performed together with liver stiffness measurement (LSM) by transient elastography (TE) and is often used to diagnose fatty liver. We aimed to define the validity criteria of CAP. CAP was measured by the M probe prior to liver biopsy in 754 consecutive patients with different liver diseases at three centers in Europe and Hong Kong (derivation cohort, n=340; validation cohort, n=414; 101 chronic hepatitis B, 154 chronic hepatitis C, 349 non-alcoholic fatty liver disease, 37 autoimmune hepatitis, 49 cholestatic liver disease, 64 others; 277 F3-4; age 52±14; body mass index 27.2±5.3 kg/m2). The primary outcome was the diagnosis of fatty liver, defined as steatosis involving ≥5% of hepatocytes. The area under the receiver-operating characteristics curve (AUROC) for CAP diagnosis of fatty liver was 0.85 (95% CI 0.82-0.88). The interquartile range (IQR) of CAP had a negative correlation with CAP (r=-0.32, p<0.001), suggesting the IQR-to-median ratio of CAP would be an inappropriate validity parameter. In the derivation cohort, the IQR of CAP was associated with the accuracy of CAP (AUROC 0.86, 0.89 and 0.76 in patients with IQR of CAP <20 [15% of patients], 20-39 [51%], and ≥40 dB/m [33%], respectively). Likewise, the AUROC of CAP in the validation cohort was 0.90 and 0.77 in patients with IQR of CAP <40 and ≥40 dB/m, respectively (p=0.004). The accuracy of CAP in detecting grade 2 and 3 steatosis was lower among patients with body mass index ≥30 kg/m2 and F3-4 fibrosis. The validity of CAP for the diagnosis of fatty liver is lower if the IQR of CAP is ≥40 dB/m. Lay summary: Controlled attenuation parameter (CAP) is measured by transient elastography (TE) for the detection of fatty liver. In this large study, using liver biopsy as a reference, we show that the variability of CAP measurements based on its interquartile range can reflect the accuracy of fatty liver diagnosis. In contrast, other clinical factors such as adiposity and liver enzyme levels do not affect the performance of CAP. Copyright © 2017 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
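The proposed validity rule is straightforward to apply in practice: compute the interquartile range of the CAP measurements from an examination and treat the result as less reliable when that IQR reaches 40 dB/m. A minimal sketch follows; the example measurements are invented.

```python
# Minimal sketch of the proposed validity check: flag an examination as less
# reliable when the interquartile range (IQR) of its CAP measurements is >= 40 dB/m.
# The example measurements are invented.
import numpy as np

def summarize_cap(measurements_db_m):
    """Return the median CAP, the IQR, and a reliability flag (IQR < 40 dB/m)."""
    m = np.asarray(measurements_db_m, dtype=float)
    q1, median, q3 = np.percentile(m, [25, 50, 75])
    iqr = q3 - q1
    return {"median_dB_m": median, "IQR_dB_m": iqr, "reliable": bool(iqr < 40)}

# Hypothetical examination with 10 CAP measurements (dB/m).
exam = [318, 305, 322, 298, 330, 310, 315, 301, 327, 312]
print(summarize_cap(exam))
```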
Neural dynamics underlying attentional orienting to auditory representations in short-term memory.
Backer, Kristina C; Binns, Malcolm A; Alain, Claude
2015-01-21
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors 0270-6474/15/351307-12$15.00/0.
Evans, Julia L.; Pollak, Seth D.
2011-01-01
This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairments (SLI). Children with SLI and age-matched controls (11;9 – 14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands for auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI as compared to CA groups. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities. PMID:21316354
Richardson, Fiona M; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M; Leff, Alex P; Price, Cathy J
2011-12-01
A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a "spoonerisms" task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.
Strait, Dana L.; Kraus, Nina
2011-01-01
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636
Caruso, Valeria C; Pages, Daniel S; Sommer, Marc A; Groh, Jennifer M
2016-06-01
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals undergo tailoring to match roughly the strength of visual signals present in the FEF, facilitating accessing of a common motor output pathway. Copyright © 2016 the American Physiological Society.
The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2016-02-03
Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.
Usage of drip drops as stimuli in an auditory P300 BCI paradigm.
Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu
2018-02-01
Many auditory BCIs use beeps as stimuli, but beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make listeners feel comfortable, reduce fatigue, and improve the performance of auditory BCI systems. The sound of dripping water is a natural sound that makes people feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve its user-friendliness, and the study examined whether such sounds are suitable stimuli for auditory BCIs. The auditory paradigm with drip-drop stimuli (the drip-drop paradigm, DP) was compared with the paradigm using beep stimuli (the beep paradigm, BP) in terms of event-related potential amplitudes, online accuracy, and ratings of likability and difficulty. DP yielded significantly higher online accuracy and information transfer rate than BP (p < 0.05, Wilcoxon signed-rank test for each). DP also received significantly higher likability ratings (p < 0.05, Wilcoxon signed-rank test), with no significant difference in rated difficulty. These results indicate that drip-drop sounds are reliable acoustic materials for use as stimuli in an auditory BCI system.
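As a minimal sketch of the paired, non-parametric comparison reported above (not the authors' code), per-subject online accuracies under the two paradigms can be compared with a Wilcoxon signed-rank test; the arrays below are hypothetical.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-subject online accuracies under the drip-drop (DP) and beep (BP) paradigms.
acc_dp = np.array([0.92, 0.88, 0.95, 0.90, 0.86, 0.93])
acc_bp = np.array([0.85, 0.84, 0.90, 0.87, 0.80, 0.88])

stat, p = wilcoxon(acc_dp, acc_bp)  # paired, non-parametric comparison
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
```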
Kornysheva, Katja; Schubotz, Ricarda I.
2011-01-01
Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions, with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p < 0.01, Bonferroni corrected), but spared motor timing and attention to the task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, the anterior midline cerebellum showed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated for by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657
Psychoacoustic and cognitive aspects of auditory roughness: definitions, models, and applications
NASA Astrophysics Data System (ADS)
Vassilakis, Pantelis N.; Kendall, Roger A.
2010-02-01
The term "auditory roughness" was first introduced in the 19th century to describe the buzzing, rattling auditory sensation accompanying narrow harmonic intervals (i.e. two tones with frequency difference in the range of ~15-150Hz, presented simultaneously). A broader definition and an overview of the psychoacoustic correlates of the auditory roughness sensation, also referred to as sensory dissonance, is followed by an examination of efforts to quantify it over the past one hundred and fifty years and leads to the introduction of a new roughness calculation model and an application that automates spectral and roughness analysis of sound signals. Implementation of spectral and roughness analysis is briefly discussed in the context of two pilot perceptual experiments, designed to assess the relationship among cultural background, music performance practice, and aesthetic attitudes towards the auditory roughness sensation.
Lu, Xi; Siu, Ka-Chun; Fu, Siu N; Hui-Chan, Christina W Y; Tsang, William W N
2013-08-01
To compare the performance of older experienced Tai Chi practitioners and healthy controls in dual-task versus single-task paradigms, namely stepping down with and without performing an auditory response task, a cross-sectional study was conducted in the Center for East-meets-West in Rehabilitation Sciences at The Hong Kong Polytechnic University, Hong Kong. Twenty-eight Tai Chi practitioners (73.6 ± 4.2 years) and 30 healthy control subjects (72.4 ± 6.1 years) were recruited. Participants were asked to step down from a 19-cm-high platform and maintain a single-leg stance for 10 s with and without a concurrent cognitive task. The cognitive task was an auditory Stroop test in which the participants were required to respond to different tones of voice regardless of their word meanings. Postural stability after stepping down under single- and dual-task paradigms, in terms of excursion of the subject's center of pressure (COP) and cognitive performance, was measured for comparison between the two groups. Our findings demonstrated significant between-group differences in more outcome measures during dual-task than single-task performance. In the auditory Stroop test, Tai Chi practitioners achieved not only a significantly lower error rate in the single-task condition but also significantly faster reaction times in the dual-task condition when compared with healthy controls of similar age and other relevant demographics. Similarly, in the stepping-down task, Tai Chi practitioners displayed not only a significantly smaller COP sway area in the single-task condition but also a significantly shorter COP sway path than healthy controls in the dual-task condition. These results show that Tai Chi practitioners achieved better postural stability after stepping down, as well as better performance in the auditory response task, than healthy controls. The group advantage, which was magnified under dual motor-cognitive task conditions, may point to the benefits of Tai Chi as a mind-and-body exercise.
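For readers unfamiliar with the COP measures mentioned above, the sketch below shows one common way of computing sway path length and a 95% prediction-ellipse sway area from a force-plate COP trace; this is a generic illustration, not the metric definitions used in the study.

```python
import numpy as np

def cop_path_length(cop_xy):
    """Total COP excursion: summed distance between successive samples (cop_xy: n x 2)."""
    steps = np.diff(cop_xy, axis=0)
    return np.sum(np.linalg.norm(steps, axis=1))

def cop_ellipse_area(cop_xy, chi2_95=5.991):
    """Area of the 95% prediction ellipse fitted to the COP trace (one common sway-area definition)."""
    cov = np.cov(cop_xy.T)                      # 2x2 covariance of the ML/AP coordinates
    return np.pi * chi2_95 * np.sqrt(np.linalg.det(cov))

# cop_xy = np.loadtxt("trial_cop.csv", delimiter=",")  # hypothetical file of (ML, AP) samples in cm
# print(cop_path_length(cop_xy), cop_ellipse_area(cop_xy))
```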
Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia
2013-08-01
The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
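As a sketch of how effect sizes from such studies can be pooled, the code below implements a standard DerSimonian-Laird random-effects aggregation; it is a generic illustration, not the authors' meta-analytic procedure (moderator analyses would additionally require meta-regression).

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)                # fixed-effect estimate
    q = np.sum(w * (y - fixed) ** 2)                 # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical standardized mean differences and their variances from a handful of studies:
# print(random_effects_pool([0.42, 0.31, 0.55, 0.18], [0.02, 0.03, 0.04, 0.05]))
```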
Białuńska, Anita; Salvatore, Anthony P
2017-12-01
Although scientific findings on and treatment approaches to concussion have changed in recent years, there continue to be challenges in understanding the nature of post-concussion behavior. There is a growing body of evidence that some deficits may be related to impaired auditory processing. To assess auditory comprehension changes over time following sport-related concussion (SRC) in young athletes, a prospective, repeated-measures mixed design was used. A sample of concussed athletes (n = 137) and a control group of age-matched, non-concussed athletes (n = 143) were administered Subtest VIII of the Computerized-Revised Token Test (C-RTT). The 88 concussed athletes selected for final analysis (no previous history of brain injury, neurological or psychiatric problems, or auditory deficits) were evaluated after injury during three sessions (PC1, PC2, and PC3); controls were tested once. Between- and within-group comparisons using RMANOVA were performed on the C-RTT Efficiency Score (ES). The ES of the SRC athletes improved over consecutive testing sessions (F = 14.7, p < .001); post-hoc analysis showed that PC1 results differed from PC2 and PC3 (ts ≥ 4.0, ps < .001), whereas PC2 and PC3 did not differ significantly (t = 0.6, p = .557). The SRC athletes demonstrated lower ES at all test sessions when compared with the control group (ts > 2.0, ps < .01). Auditory comprehension, which was impaired following concussion, improved over time, but the rate of improvement slowed after the second testing session, especially in terms of timing. Moreover, not only auditory processing but also sensorimotor integration and/or motor execution may be compromised after a concussion.
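A repeated-measures ANOVA like the one reported above can be run in Python as follows; the data frame and column names are hypothetical stand-ins for the C-RTT efficiency scores across sessions.

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per concussed athlete per session.
df = pd.DataFrame({
    "athlete": list(range(10)) * 3,
    "session": ["PC1"] * 10 + ["PC2"] * 10 + ["PC3"] * 10,
    "es":      [52, 48, 55, 60, 47, 58, 50, 53, 49, 57,
                61, 59, 63, 66, 58, 65, 60, 62, 57, 64,
                62, 60, 64, 65, 59, 66, 61, 63, 58, 65],
})

# Within-group repeated-measures ANOVA on the efficiency score.
print(AnovaRM(df, depvar="es", subject="athlete", within=["session"]).fit().anova_table)

# Post-hoc paired contrast (e.g., PC1 vs PC2), uncorrected here for brevity.
pc1 = df[df.session == "PC1"].sort_values("athlete")["es"].to_numpy()
pc2 = df[df.session == "PC2"].sort_values("athlete")["es"].to_numpy()
print(ttest_rel(pc1, pc2))
```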
Auditory Speech Perception Development in Relation to Patient's Age with Cochlear Implant
Ciscare, Grace Kelly Seixas; Mantello, Erika Barioni; Fortunato-Queiroz, Carla Aparecida Urzedo; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa dos
2017-01-01
Introduction Cochlear implantation in adolescent patients with pre-lingual deafness is still a debatable issue. Objective The objective of this study is to analyze and compare the development of auditory speech perception in children with pre-lingual auditory impairment who received a cochlear implant, across different age groups in the first year after implantation. Method This is a retrospective, documentary study in which we analyzed the records of 78 children of both sexes with severe bilateral sensorineural hearing loss who were unilateral cochlear implant users. They were divided into three groups: G1, 22 children younger than 42 months; G2, 28 children aged 43 to 83 months; and G3, 28 children older than 84 months. We collected medical record data to characterize the patients, auditory thresholds with cochlear implants, assessment of speech perception, and auditory skills. Results There was no statistical difference in the association of the results among groups G1, G2, and G3 with sex, caregiver education level, city of residence, or speech perception level. There was a moderate correlation between age and hearing aid use time, and between age and cochlear implant use time. There was a strong correlation between age and the age at which the cochlear implant was performed, and between hearing aid use time and the age at which the CI was performed. Conclusion There was no statistical difference in speech perception in relation to the patient's age at cochlear implantation. There were statistically significant differences in auditory deprivation time between G3 and G1 and between G2 and G1, and in hearing aid use time between G3 and G2 and between G3 and G1. PMID:28680487
1994-07-01
psychological refractory period 15. Two-flash threshold 16. Critical flicker fusion (CFF) 17. Steady state visually evoked response 18. Auditory brain stem...States of awareness I: Subliminal perception relationships to situational awareness (AL-TR-1992-0085). Brooks Air Force Base, TX: Armstrong...the signals required different inputs (e.g., visual versus auditory) (Colley & Beech, 1989). Despite support of this theory from such experiments
Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.
2015-01-01
The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766
Visual and auditory steady-state responses in attention-deficit/hyperactivity disorder.
Khaleghi, Ali; Zarafshan, Hadi; Mohammadi, Mohammad Reza
2018-05-22
We designed a study to investigate the patterns of the steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) in adolescents with attention-deficit/hyperactivity disorder (ADHD) when performing a motor response inhibition task. Thirty 12- to 18-year-old adolescents with ADHD and 30 healthy control adolescents underwent an electroencephalogram (EEG) examination during steady-state stimuli when performing a stop-signal task. Then, we calculated the amplitude and phase of the steady-state responses in both visual and auditory modalities. Results showed that adolescents with ADHD had a significantly poorer performance in the stop-signal task during both visual and auditory stimuli. The SSVEP amplitude of the ADHD group was larger than that of the healthy control group in most regions of the brain, whereas the ASSR amplitude of the ADHD group was smaller than that of the healthy control group in some brain regions (e.g., right hemisphere). In conclusion, poorer task performance (especially inattention) and neurophysiological results in ADHD demonstrate a possible impairment in the interconnection of the association cortices in the parietal and temporal lobes and the prefrontal cortex. Also, the motor control problems in ADHD may arise from neural deficits in the frontoparietal and occipitoparietal systems and other brain structures such as cerebellum.
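The amplitude and phase of a steady-state response at the stimulation frequency can be read off the Fourier spectrum of the EEG segment. A minimal sketch is shown below (a generic illustration, not the authors' analysis pipeline; variable names are hypothetical).

```python
import numpy as np

def steady_state_amp_phase(x, fs, f_stim):
    """Amplitude and phase of the spectrum of x at the steady-state stimulation frequency."""
    x = np.asarray(x, float)
    x = x - x.mean()                               # remove the DC offset
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))     # nearest frequency bin
    amp = 2.0 * np.abs(spec[k]) / len(x)           # single-sided amplitude
    phase = np.angle(spec[k])
    return amp, phase

# Example: a simulated 40 Hz ASSR-like oscillation in noise, sampled at 500 Hz.
fs, f_stim = 500.0, 40.0
t = np.arange(0, 2.0, 1.0 / fs)
eeg = 1.5 * np.sin(2 * np.pi * f_stim * t + 0.3) + np.random.randn(t.size)
print(steady_state_amp_phase(eeg, fs, f_stim))
```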
Conceptual Modeling Framework for E-Area PA HELP Infiltration Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyer, J. A.
A conceptual modeling framework based on the proposed E-Area Low-Level Waste Facility (LLWF) closure cap design is presented for conducting Hydrologic Evaluation of Landfill Performance (HELP) model simulations of intact and subsided cap infiltration scenarios for the next E-Area Performance Assessment (PA).
Seydell-Greenwald, Anna; Raven, Erika P.; Leaver, Amber M.; Turesky, Ted K.; Rauschecker, Josef P.
2014-01-01
Subjective tinnitus, or “ringing in the ears,” is perceived by 10 to 15 percent of the adult population and causes significant suffering in a subset of patients. While it was originally thought of as a purely auditory phenomenon, there is increasing evidence that the limbic system influences whether and how tinnitus is perceived, far beyond merely determining the patient's emotional reaction to the phantom sound. Based on functional imaging and electrophysiological data, recent articles frame tinnitus as a “network problem” arising from abnormalities in auditory-limbic interactions. Diffusion-weighted magnetic resonance imaging is a noninvasive method for investigating anatomical connections in vivo. It thus has the potential to provide anatomical evidence for the proposed changes in auditory-limbic connectivity. However, the few diffusion imaging studies of tinnitus performed to date have inconsistent results. In the present paper, we briefly summarize the results of previous studies, aiming to reconcile their results. After detailing analysis methods, we then report findings from a new dataset. We conclude that while there is some evidence for tinnitus-related increases in auditory and auditory-limbic connectivity that counteract hearing-loss related decreases in auditory connectivity, these results should be considered preliminary until several technical challenges have been overcome. PMID:25050181
Black, Emily; Stevenson, Jennifer L; Bish, Joel P
2017-08-01
The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well-defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory-directed attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition) consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regards to accuracy, but not in terms of reaction time, suggesting an increased ability to overcome global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for auditory processing of individuals with differing degrees of musical experience.
Auditory and language development in Mandarin-speaking children after cochlear implantation.
Lu, Xing; Qin, Zhaobing
2018-04-01
To evaluate early auditory performance, speech perception and language skills in Mandarin-speaking prelingual deaf children in the first two years after they received a cochlear implant (CI) and analyse the effects of possible associated factors. The Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS)/Meaningful Auditory Integration Scale (MAIS), Mandarin Early Speech Perception (MESP) test and Putonghua Communicative Development Inventory (PCDI) were used to assess auditory and language outcomes in 132 Mandarin-speaking children pre- and post-implantation. Children with CIs exhibited an ITMAIS/MAIS and PCDI developmental trajectory similar to that of children with normal hearing. The increased number of participants who achieved MESP categories 1-6 at each test interval showed a significant improvement in speech perception by paediatric CI recipients. Age at implantation and socioeconomic status were consistently associated with both auditory and language outcomes in the first two years post-implantation. Mandarin-speaking children with CIs exhibit significant improvements in early auditory and language development. Though these improvements followed the normative developmental trajectories, they still exhibited a gap compared with normative values. Earlier implantation and higher socioeconomic status are consistent predictors of greater auditory and language skills in the early stage. Copyright © 2018 Elsevier B.V. All rights reserved.
Subchronic JP-8 jet fuel exposure enhances vulnerability to noise-induced hearing loss in rats.
Fechter, L D; Fisher, J W; Chapman, G D; Mokashi, V P; Ortiz, P A; Reboulet, J E; Stubbs, J E; Lear, A M; McInturf, S M; Prues, S L; Gearhart, C A; Fulton, S; Mattie, D R
2012-01-01
Both laboratory and epidemiological studies published over the past two decades have identified the risk of excess hearing loss when specific chemical contaminants are present along with noise. The objective of this study was to evaluate the potency of JP-8 jet fuel to enhance noise-induced hearing loss (NIHL) using inhalation exposure to fuel and simultaneous exposure to either continuous or intermittent noise exposure over a 4-wk exposure period using both male and female Fischer 344 rats. In the initial study, male (n = 5) and female (n = 5) rats received inhalation exposure to JP-8 fuel for 6 h/d, 5 d/wk for 4 wk at concentrations of 200, 750, or 1500 mg/m³. Parallel groups of rats also received nondamaging noise (constant octave band noise at 85 dB(lin)) in combination with the fuel, noise alone (75, 85, or 95 dB), or no exposure to fuel or noise. Significant concentration-related impairment of auditory function measured by distortion product otoacoustic emissions (DPOAE) and compound action potential (CAP) threshold was seen in rats exposed to combined JP-8 plus noise exposure when JP-8 levels of 1500 mg/m³ were presented with trends toward impairment seen with 750 mg/m³ JP-8 + noise. JP-8 alone exerted no significant effect on auditory function. In addition, noise was able to disrupt the DPOAE and increase auditory thresholds only when noise exposure was at 95 dB. In a subsequent study, male (n = 5 per group) and female (n = 5 per group) rats received 1000 mg/m³ JP-8 for 6 h/d, 5 d/wk for 4 wk with and without exposure to 102 dB octave band noise that was present for 15 min out of each hour (total noise duration 90 min). Comparisons were made to rats receiving only noise, and those receiving no experimental treatment. Significant impairment of auditory thresholds especially for high-frequency tones was identified in the male rats receiving combined treatment. This study provides a basis for estimating excessive hearing loss under conditions of subchronic JP-8 jet fuel exposure.
NASA Technical Reports Server (NTRS)
Begault, Durand R.
2018-01-01
This document reviews non-auditory effects of noise relevant to habitable volume requirements in cislunar space. The non-auditory effects of noise in future long-term space habitats are likely to be impactful on team and individual performance, sleep, and cognitive well-being. This report has provided several recommendations for future standards and procedures for long-term space flight habitats, along with recommendations for NASA's Human Research Program in support of DST mission success.
[Clinical evaluation of bedridden patients with pneumonia receiving home health care].
Fukuyama, Hajime; Ishida, Tadashi; Tachibana, Hiromasa; Iga, Chiya; Nakagawa, Hiroaki; Ito, Akihiro; Ubukata, Satoshi; Yoshioka, Hiroshige; Arita, Machiko; Hashimoto, Toru
2010-12-01
Pneumonia which develops in patients while living in their own home is categorized as community-acquired pneumonia (CAP), even if these patients are bedridden and receiving home health care. However, because of the differences in patient backgrounds, we speculated that the clinical outcomes and pathogens of bedridden patients with pneumonia who are receiving home health care would be different from those of CAP. We conducted a prospective study of patients with CAP who were hospitalized at our hospital from April 2007 through September 2009. We compared home health care bedridden pneumonia (performance status 4, PS4-CAP) with non-PS4-CAP in a total of 505 enrolled patients in this study. Among these, 66 had PS4-CAP, mostly associated with aspiration. Severity scores, mortality rate, recurrence rate and length of hospital stay of those with PS4-CAP were significantly higher than those with non-PS4-CAP. Drug resistant pathogens were more frequently isolated from patients with PS4-CAP than from those of non-PS4-CAP. The results of patients with PS4-CAP were in agreement with those of previous health care-associated pneumonia (HCAP) reports. The present study suggested home health care bedridden pneumonia should be categorized as HCAP, not CAP.
A Behavioral Study of Distraction by Vibrotactile Novelty
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.; Ljungberg, Jessica K.; Elsley, Jane V.; Lindkvist, Markus
2011-01-01
Past research has demonstrated that the occurrence of unexpected task-irrelevant changes in the auditory or visual sensory channels captured attention in an obligatory fashion, hindering behavioral performance in ongoing auditory or visual categorization tasks and generating orientation and re-orientation electrophysiological responses. We report…
How musical expertise shapes speech perception: evidence from auditory classification images.
Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel
2015-09-24
It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the onset of the first formant and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
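Conceptually, an auditory classification image is a reverse-correlation map: the listener's trial-by-trial responses are regressed onto the time-frequency noise field of each stimulus. The sketch below uses a ridge-penalized logistic regression as a stand-in for the smoothness-regularized GLM used in this literature; the data shapes and names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: noise_tf holds the z-scored time-frequency noise field of each trial,
# flattened to (n_trials, n_freq * n_time); resp holds the binary phoneme response per trial.
n_trials, n_freq, n_time = 2000, 32, 50
rng = np.random.default_rng(0)
noise_tf = rng.standard_normal((n_trials, n_freq * n_time))
resp = (noise_tf[:, 100] + 0.5 * rng.standard_normal(n_trials) > 0).astype(int)  # toy responses

clf = LogisticRegression(penalty="l2", C=0.01, max_iter=2000)
clf.fit(noise_tf, resp)

# The weight map plays the role of the classification image: which time-frequency
# regions pushed the listener toward one phonemic response or the other.
aci = clf.coef_.reshape(n_freq, n_time)
print(aci.shape, np.unravel_index(np.abs(aci).argmax(), aci.shape))
```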
Aroudi, Ali; Doclo, Simon
2017-07-01
To decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers, a least-squares method has been recently proposed. This method however requires the clean speech signals of both the attended and the unattended speaker to be available as reference signals. Since in practice only the binaural signals consisting of a reverberant mixture of both speakers and background noise are available, in this paper we explore the potential of using these (unprocessed) signals as reference signals for decoding auditory attention in different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In addition, we investigate whether it is possible to use these signals instead of the clean attended speech signal for filter training. The experimental results show that using the unprocessed binaural signals for filter training and for decoding auditory attention is feasible with a relatively large decoding performance, although for most acoustic conditions the decoding performance is significantly lower than when using the clean speech signals.
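The least-squares decoding approach referred to above is, in essence, a backward model: a linear filter maps time-lagged EEG onto a reference speech envelope, and attention is decoded by asking which speaker's envelope correlates best with the reconstruction. A minimal ridge-regularized sketch under these assumptions (not the authors' exact implementation):

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of the EEG (n_samples x n_channels) into a design matrix."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[: n - lag]
    return X

def train_decoder(X, envelope, lam=1e3):
    """Regularized least-squares filter: w = (X'X + lam*I)^-1 X' envelope."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ envelope)

def decode_attention(X, w, env_a, env_b):
    """Label the trial by whichever reference envelope the reconstruction correlates with more."""
    rec = X @ w
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    return "A" if r(rec, env_a) > r(rec, env_b) else "B"

# eeg (n_samples x n_channels) and the two speakers' envelopes would come from the recordings;
# the reference envelopes may be the clean speech signals or, as explored above, envelopes
# derived from the unprocessed binaural mixtures.
```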
Neural correlates of auditory short-term memory in rostral superior temporal cortex.
Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo
2014-12-01
Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.
30 CFR 250.124 - Will BSEE approve gas injection into the cap rock containing a sulphur deposit?
Code of Federal Regulations, 2012 CFR
2012-07-01
... rock containing a sulphur deposit? 250.124 Section 250.124 Mineral Resources BUREAU OF SAFETY AND... CONTINENTAL SHELF General Performance Standards § 250.124 Will BSEE approve gas injection into the cap rock containing a sulphur deposit? To receive the Regional Supervisor's approval to inject gas into the cap rock...
30 CFR 250.124 - Will BSEE approve gas injection into the cap rock containing a sulphur deposit?
Code of Federal Regulations, 2013 CFR
2013-07-01
... rock containing a sulphur deposit? 250.124 Section 250.124 Mineral Resources BUREAU OF SAFETY AND... CONTINENTAL SHELF General Performance Standards § 250.124 Will BSEE approve gas injection into the cap rock containing a sulphur deposit? To receive the Regional Supervisor's approval to inject gas into the cap rock...
30 CFR 250.124 - Will BSEE approve gas injection into the cap rock containing a sulphur deposit?
Code of Federal Regulations, 2014 CFR
2014-07-01
... rock containing a sulphur deposit? 250.124 Section 250.124 Mineral Resources BUREAU OF SAFETY AND... CONTINENTAL SHELF General Performance Standards § 250.124 Will BSEE approve gas injection into the cap rock containing a sulphur deposit? To receive the Regional Supervisor's approval to inject gas into the cap rock...
NASA Astrophysics Data System (ADS)
Lee, J. M.; Lee, J. I.; Lim, Y. J.
2010-03-01
The aim of the present study was to investigate surface characteristics of four different titanium surfaces (AN: anodized at 270 V; AN-CaP: anodic oxidation and CaP deposited; SLA: sandblasted and acid etched; MA: machined) and to evaluate biological behaviors such as cell adhesion, cell proliferation, cytoskeletal organization, and osteogenic protein expression of MG63 osteoblast-like cells at an early stage. Surface analysis was performed using scanning electron microscopy, thin-film X-ray diffractometry, and a confocal laser scanning microscope. In order to evaluate cellular responses, MG63 osteoblast-like cells were used. Cell viability was evaluated by MTT assay. Immunofluorescent analyses of actin, type I collagen, osteonectin and osteocalcin were performed. The anodized and CaP deposited specimen showed homogeneously distributed CaP particles around micropores and exhibited anatase type oxides, titanium, and HA crystalline structures. This experiment suggests that CaP particles on the anodic oxidation surface affect cellular attachment and spreading. When designing an in vitro biological study for CaP coated titanium, it must be taken into account that preincubation in medium prior to cell seeding and the cell culture medium may affect the CaP coatings. All these observations illustrate the importance of the experimental conditions and the physicochemical parameters of the CaP coating. Further evaluations, such as long-term in vitro cellular assays and in vivo experiments, are necessary to clarify the effect of CaP deposition on biological responses.
Abdollahi Fakhim, Shahin; Naderpoor, Masoud; Mousaviagdas, Mehrnoosh
2014-01-01
Introduction: First branchial cleft anomalies manifest with duplication of the external auditory canal. Case Report: This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. Conclusion: It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear. PMID:25320705
Neurological outcome of patients with cryopyrin-associated periodic syndrome (CAPS).
Mamoudjy, Nafissa; Maurey, Hélène; Marie, Isabelle; Koné-Paut, Isabelle; Deiva, Kumaran
2017-02-14
To assess the neurological involvement and outcome, including school and professional performance, of adults and children with cryopyrin-associated periodic syndrome (CAPS). In this observational study, patients with genetically proven CAPS who were followed in the national referral centre for autoinflammatory diseases at Bicêtre hospital were assessed. Neurological manifestations, CSF data and MRI results at diagnosis and during follow-up were analyzed. Twenty-four patients (15 adults and 9 children at diagnosis) with CAPS were included. The median age at disease onset was 0 years (birth) [range 0-14], the median age at diagnosis was 20 years [range 0-53] and the mean duration of follow-up was 10.4 ± 2 years. Neurological involvement at diagnosis, mostly headaches and hearing loss, was noted in 17 patients (71%). Two patients of the same family had abnormal brain MRI. The A439V mutation is frequently associated with a non-neurological phenotype, while the R260W mutation tends to be associated with neurological involvement. Eleven adult patients (61%) and 3 children (50%) experienced school difficulties. Neurological involvement is frequent in patients with CAPS, and the majority of patients presented with school difficulties that had consequences for professional outcomes in adulthood. Further studies in larger cohorts of children with CAPS, focusing on intellectual efficiency and school performance, are necessary.
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
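The reported spatial correspondence between myeloarchitecture and tonotopic signal strength amounts to a vertex- or voxel-wise correlation within each region. A minimal sketch follows (hypothetical array names, not the authors' pipeline).

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical surface maps restricted to auditory cortex: r1_map estimates myelination,
# tuning_strength quantifies the tonotopic (best-frequency) signal, roi_labels indexes regions.
def myelin_tonotopy_correlation(r1_map, tuning_strength, roi_labels):
    """Spearman correlation between myelin and tonotopic strength, computed per region."""
    results = {}
    for roi in np.unique(roi_labels):
        m = roi_labels == roi
        rho, p = spearmanr(r1_map[m], tuning_strength[m])
        results[int(roi)] = (rho, p)
    return results
```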
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction The auditory system of HIV-positive children may have deficits at various levels, including a high incidence of middle ear problems that can cause hearing loss. Objective The objective of this study is to characterize the performance of children infected by the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing as verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented with hearing impairment. In the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups; most errors occurred in the left ear in the group of 8-year-olds, with similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV, and its comorbidity with several biological and environmental factors, indicates the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Differential cognitive and perceptual correlates of print reading versus braille reading.
Veispak, Anneli; Boets, Bart; Ghesquière, Pol
2013-01-01
The relations between reading, auditory, speech, phonological and tactile spatial processing are investigated in a Dutch speaking sample of blind braille readers as compared to sighted print readers. Performance is assessed in blind and sighted children and adults. Regarding phonological ability, braille readers perform equally well compared to print readers on phonological awareness, better on verbal short-term memory and significantly worse on lexical retrieval. The groups do not differ on speech perception or auditory processing. Braille readers, however, have more sensitive fingers than print readers. Investigation of the relations between these cognitive and perceptual skills and reading performance indicates that in the group of braille readers auditory temporal processing has a longer lasting and stronger impact not only on phonological abilities, which have to satisfy the high processing demands of the strictly serial language input, but also directly on the reading ability itself. Print readers switch between grapho-phonological and lexical reading modes depending on the familiarity of the items. Furthermore, the auditory temporal processing and speech perception, which were substantially interrelated with phonological processing, had no direct associations with print reading measures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types
Chiou, Shiau-Chuen; Chang, Erik Chihhung
2016-01-01
Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with auditory than with visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants either learned a 90°-out-of-phase pattern for three consecutive days with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed diverse performance change after practice when the feedback was removed between Lissajous and the other two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition where the irregular rhythm counting task was applied as a secondary task also suggested that lower involvement of conscious control may result in better performance in bimanual coordination. PMID:26895286
Sapir, Shimon; Pud, Dorit
2008-01-01
To assess the effect of tonic pain stimulation on auditory processing of speech-relevant acoustic signals in healthy pain-free volunteers. Sixty university students, randomly assigned to either a thermal pain stimulation (46 degrees C/6 min) group (PS) or no pain stimulation group (NPS), performed a rate change detection task (RCDT) involving sinusoidally frequency-modulated vowel-like signals. Task difficulty was manipulated by changing the rate of the modulated signals (henceforth rate). Perceived pain intensity was evaluated using a visual analog scale (VAS) (0-100). Mean pain rating was approximately 33 in the PS group and approximately 3 in the NPS group. Pain stimulation was associated with poorer performance on the RCDT, but this trend was not statistically significant. Performance worsened with increasing rate of signal modulation in both groups (p < 0.0001), with no pain by rate interaction. The present findings indicate a trend whereby mild or moderate pain appears to affect auditory processing of speech-relevant acoustic signals. This trend, however, was not statistically significant. It is possible that more intense pain would yield more pronounced (deleterious) effects on auditory processing, but this needs to be verified empirically.
Auditory dysfunction associated with solvent exposure
2013-01-01
Background A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL). The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA), transient evoked otoacoustic emissions (TEOAE), Random Gap Detection (RGD) and Hearing-in-Noise test (HINT). Results Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD and HINT) was independently constructed. For all of the models solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions This study provides further evidence of the possible adverse effect of solvents on the peripheral and central auditory functioning. A discussion of these effects and the utility of selected hearing tests to assess SIHL is addressed. PMID:23324255
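The per-outcome regression models described above can be sketched as follows, with exposure status and age as predictors; the data frame and column names are hypothetical, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical worker-level data: one row per participant.
df = pd.DataFrame({
    "pta_db":  [12, 18, 25, 10, 30, 22, 15, 28],   # pure-tone average, dB HL
    "exposed": [0,  1,  1,  0,  1,  1,  0,  1],    # solvent exposure (0 = non-exposed)
    "age":     [34, 41, 52, 29, 58, 47, 38, 55],
})

# One model per auditory outcome; only the PTA model is shown here.
model = smf.ols("pta_db ~ exposed + age", data=df).fit()
print(model.params)    # exposure and age coefficients
print(model.pvalues)
```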
Music and language: relations and disconnections.
Kraus, Nina; Slater, Jessica
2015-01-01
Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
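The switch costs and crossmodal congruence effects discussed above can be computed directly from trial-level reaction times, for example as in the sketch below; the data frame and column names are hypothetical.

```python
import pandas as pd

# Hypothetical trial-level data from a bimodal attention-switching experiment.
trials = pd.DataFrame({
    "modality":  ["visual", "visual", "auditory", "auditory"] * 50,
    "switch":    [False, True] * 100,
    "congruent": [True, False] * 100,
    "rt":        pd.Series(range(200)).astype(float) / 100 + 0.6,
    "correct":   [True] * 200,
})

correct = trials[trials.correct]

# Modality switch cost: mean RT on switch trials minus mean RT on repeat trials, per modality.
rt_by_switch = correct.groupby(["modality", "switch"])["rt"].mean().unstack()
switch_cost = rt_by_switch[True] - rt_by_switch[False]

# Congruence effect: incongruent minus congruent RT, per target modality.
rt_by_cong = correct.groupby(["modality", "congruent"])["rt"].mean().unstack()
congruence_effect = rt_by_cong[False] - rt_by_cong[True]

print(switch_cost, congruence_effect, sep="\n")
```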
Eye-movements intervening between two successive sounds disrupt comparisons of auditory location
Pavani, Francesco; Husain, Masud; Driver, Jon
2008-01-01
Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808
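Sensitivity in this same/different comparison task is reported as d′, conventionally computed as z(hit rate) minus z(false-alarm rate). A minimal sketch of that calculation, with invented response counts standing in for the study's data:

```python
# d' for a same/different discrimination, computed from hit and false-alarm rates.
# The counts below are illustrative, not data from the study.
from scipy.stats import norm

hits, misses = 42, 8                 # "different" trials answered "different" / "same"
false_alarms, correct_rej = 12, 38   # "same" trials answered "different" / "same"

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # z(H) - z(F)
print(f"d' = {d_prime:.2f}")
```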
Negative Priming in Free Recall Reconsidered
ERIC Educational Resources Information Center
Hanczakowski, Maciej; Beaman, C. Philip; Jones, Dylan M.
2016-01-01
Negative priming in free recall is the finding of impaired memory performance when previously ignored auditory distracters become targets of encoding and retrieval. This negative priming has been attributed to an aftereffect of deploying inhibitory mechanisms that serve to suppress auditory distraction and minimize interference with learning and…
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
Lifespan Differences in Cortical Dynamics of Auditory Perception
ERIC Educational Resources Information Center
Muller, Viktor; Gruber, Walter; Klimesch, Wolfgang; Lindenberger, Ulman
2009-01-01
Using electroencephalographic recordings (EEG), we assessed differences in oscillatory cortical activity during auditory-oddball performance between children aged 9-13 years, younger adults, and older adults. From childhood to old age, phase synchronization increased within and between electrodes, whereas whole power and evoked power decreased. We…
Comparison of Performance of Eight-Year-Old Children on Three Auditory Sequential Memory Tests.
ERIC Educational Resources Information Center
Chermak, Gail D.; O'Connell, Vickie I.
1981-01-01
Twenty normal children were administered three tests of auditory sequential memory. A Pearson product-moment correlation of .50 and coefficients of determination showed all but one relationship to be nonsignificant and predictability between pairs of scores to be poor. (Author)
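The analysis reported is a Pearson product-moment correlation together with coefficients of determination. A minimal sketch of that computation on invented pairs of memory-test scores (not the study's data):

```python
# Pearson r and coefficient of determination for a pair of test-score lists.
# Scores are invented; the study's raw data are not given in the abstract.
from scipy.stats import pearsonr

test_a = [12, 15, 9, 14, 11, 16, 10, 13]
test_b = [10, 14, 11, 15, 9, 13, 12, 12]

r, p_value = pearsonr(test_a, test_b)
r_squared = r ** 2  # proportion of shared variance between the two tests
print(f"r = {r:.2f}, r^2 = {r_squared:.2f}, p = {p_value:.3f}")
```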
ERIC Educational Resources Information Center
Semanchik, Karen
This teacher's guide presents a course for training hearing-impaired students to listen to, create, and perform music. It emphasizes development of individual skills and group participation, encouraging students to contribute a wide variety of auditory and musical abilities and experiences while developing auditory acuity and attention. A variety…
Qiu, S. R.; Norton, M. A.; Raman, R. N.; ...
2015-10-02
High dielectric constant multilayer coatings are commonly used on high-reflection mirrors for high-peak-power laser systems because of their high laser-damage resistance. However, surface contaminants often lead to damage upon laser exposure, thus limiting the mirror’s lifetime and performance. One plausible approach to improve the overall mirror resistance against laser damage, including that induced by laser-contaminant coupling, is to coat the multilayers with a thin protective capping (absentee) layer on top of the multilayer coatings. An understanding of the underlying mechanism by which laser-particle interaction leads to capping layer damage is important for the rational design and selection of capping materials for high-reflection multilayer coatings. In this paper, we examine the responses of two candidate capping layer materials, made of SiO2 and Al2O3, over silica-hafnia multilayer coatings. These are exposed to a single oblique shot of a 1053 nm laser beam (fluence ~10 J/cm2, pulse length 14 ns), in the presence of Ti particles on the surface. We find that the two capping layers show markedly different responses to the laser-particle interaction. The Al2O3 cap layer exhibits severe damage, with the capping layer becoming completely delaminated at the particle locations. The SiO2 capping layer, on the other hand, is only mildly modified by a shallow depression. Combining the observations with optical modeling and thermal/mechanical calculations, we argue that a high-temperature thermal field from plasma generated by the laser-particle interaction above a critical fluence is responsible for the surface modification of each capping layer. The great difference in damage behavior is mainly attributed to the large disparity in the thermal expansion coefficient of the two capping materials, with that of the Al2O3 layer being about 15 times greater than that of SiO2.
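The damage contrast is attributed mainly to the roughly 15-fold disparity in thermal expansion coefficient between the Al2O3 and SiO2 capping layers. A rough, illustrative sketch of the induced film stress using the generic biaxial relation sigma ≈ E·α·ΔT/(1−ν); the moduli, expansion coefficients, Poisson ratios and temperature rise below are textbook-style assumptions, not values from the paper:

```python
# Rough comparison of laser-induced thermal stress in the two capping layers,
# using sigma ~ E * alpha * dT / (1 - nu). All numbers below are generic
# textbook-style assumptions for illustration, not values from the paper.
materials = {
    # name:   (E in GPa, alpha in 1/K, Poisson ratio)
    "SiO2":  (73.0,  0.55e-6, 0.17),
    "Al2O3": (300.0, 8.0e-6,  0.22),  # alpha roughly 15x that of SiO2
}
delta_T = 1000.0  # assumed transient temperature rise under the plasma (K)

for name, (E_gpa, alpha, nu) in materials.items():
    sigma_pa = E_gpa * 1e9 * alpha * delta_T / (1.0 - nu)
    print(f"{name}: thermal stress on the order of {sigma_pa / 1e6:.0f} MPa")
```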
Systematic Review of Nontumor Pediatric Auditory Brainstem Implant Outcomes.
Noij, Kimberley S; Kozin, Elliott D; Sethi, Rosh; Shah, Parth V; Kaplan, Alyson B; Herrmann, Barbara; Remenschneider, Aaron; Lee, Daniel J
2015-11-01
The auditory brainstem implant (ABI) was initially developed for patients with deafness as a result of neurofibromatosis type 2. ABI indications have recently extended to children with congenital deafness who are not cochlear implant candidates. Few multi-institutional outcome data exist. Herein, we aim to provide a systematic review of outcomes following implantation of the ABI in pediatric patients with nontumor diagnosis, with a focus on audiometric outcomes. PubMed, Embase, and Cochrane. A systematic review of literature was performed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) recommendations. Variables assessed included age at implantation, diagnosis, medical history, cochlear implant history, radiographic findings, ABI device implanted, surgical approach, complications, side effects, and auditory outcomes. The initial search identified 304 articles; 21 met inclusion criteria for a total of 162 children. The majority of these patients had cochlear nerve aplasia (63.6%, 103 of 162). Cerebrospinal fluid leak occurred in up to 8.5% of cases. Audiometric outcomes improved over time. After 5 years, almost 50% of patients reached Categories of Auditory Performance scores >4; however, patients with nonauditory disabilities did not demonstrate a similar increase in scores. ABI surgery is a reasonable option for the habilitation of deaf children who are not cochlear implant candidates. Although improvement in Categories of Auditory Performance scores was seen across studies, pediatric ABI users with nonauditory disabilities have inferior audiometric outcomes. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
Auditory displays as occasion setters.
Mckeown, Denis; Isherwood, Sarah; Conway, Gareth
2010-02-01
The aim of this study was to evaluate whether representational sounds that capture the richness of experience of a collision enhance performance in braking to avoid a collision relative to other forms of warnings in a driving simulator. There is increasing interest in auditory warnings that are informative about their referents. But as well as providing information about some intended object, warnings may be designed to set the occasion for a rich body of information about the outcomes of behavior in a particular context. These richly informative warnings may offer performance advantages, as they may be rapidly processed by users. An auditory occasion setter for a collision (a recording of screeching brakes indicating imminent collision) was compared with two other auditory warnings (an abstract and an "environmental" sound), a speech message, a visual display, and no warning in a fixed-base driving simulator as interfaces to a collision avoidance system. The main measure was braking response times at each of two headways (1.5 s and 3 s) to a lead vehicle. The occasion setter demonstrated statistically significantly faster braking responses at each headway in 8 out of 10 comparisons (with braking responses equally fast to the abstract warning at 1.5 s and the environmental warning at 3 s). Auditory displays that set the occasion for an outcome in a particular setting and for particular behaviors may offer small but critical performance enhancements in time-critical applications. The occasion setter could be applied in settings where speed of response by users is of the essence.
Lopez, William Omar Contreras; Higuera, Carlos Andres Escalante; Fonoff, Erich Talamoni; Souza, Carolina de Oliveira; Albicker, Ulrich; Martinez, Jairo Alberto Espinoza
2014-10-01
Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use, and to our knowledge, none utilize a smartphone application allowing individualized sounds and cadence. Therefore, we analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, offering over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch with an accelerometer to detect the magnitude and direction of the proper acceleration and to track calorie count, sleep patterns, step count and daily distances. The present study included patients with idiopathic PD who presented with gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance across three major dependent variables: walking speed by 38.1%, cadence by 28.1% and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices. Copyright © 2014 Elsevier B.V. All rights reserved.
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
NASA Astrophysics Data System (ADS)
Lim, Emmanuel; Kuznetsov, Aleksey E.; Beratan, David N.
2012-10-01
To understand ligand capping effects on the structure and electronic properties of CdnXn (X = Se, Te; n = 3, 4, 6, and 9) species, we performed density functional theory studies of SCH2COOH-, SCH2CH2CO2H-, and SCH2CH2NH2-capped nanoparticles. CdnXn capping with all three capping groups was found to produce significant NP distortions. All three ligands destabilize the NP HOMOs and either stabilize or destabilize their LUMOs, leading to closure of the HOMO/LUMO gaps for all of the capped species, because the HOMO destabilization effect is generally larger than the LUMO destabilization effect. The calculated absorption spectra of bare and capped NPs, exemplified by CdnXn with n = 4 and 6, show that all capping groups cause noticeable red shifts for n = 4 and mostly blue shifts for n = 6.
Laser rods with undoped, flanged end-caps for end-pumped laser applications
Meissner, Helmuth E.; Beach, Raymond J.; Bibeau, Camille; Sutton, Steven B.; Mitchell, Scott; Bass, Isaac; Honea, Eric
1999-01-01
A method and apparatus for achieving improved performance in a solid state laser is provided. A flanged, at least partially undoped end-cap is attached to at least one end of a laserable medium. Preferably flanged, undoped end-caps are attached to both ends of the laserable medium. Due to the low scatter requirements for the interface between the end-caps and the laser rod, a non-adhesive method of bonding is utilized such as optical contacting combined with a subsequent heat treatment of the optically contacted composite. The non-bonded end surfaces of the flanged end-caps are coated with laser cavity coatings appropriate for the lasing wavelength of the laser rod. A cooling jacket, sealably coupled to the flanged end-caps, surrounds the entire length of the laserable medium. Radiation from a pump source is focussed by a lens duct and passed through at least one flanged end-cap into the laser rod.
NASA Astrophysics Data System (ADS)
Vidhya, K.; Devarajan, V. P.; Viswanathan, C.; Nataraj, D.; Bhoopathi, G.
2013-06-01
In this study, we investigated the antibacterial activity of starch-capped ZnO and CdO NPs. The NPs were prepared through a green technique at room temperature, and the obtained samples were characterized using XRD and PL techniques. The XRD patterns confirm the crystalline nature of the samples, showing a hexagonal structure for the ZnO NPs and a monoclinic structure for the CdO NPs, with an average particle size of about 20 nm. Further, the optical properties of the NPs were investigated using the PL technique, in which the starch-capped ZnO NPs show maximum emission at 440 nm whereas the starch-capped CdO NPs show maximum emission at 545 nm. Finally, a toxicity test was performed with E. coli bacteria and the results were analyzed. The starch-capped ZnO NPs induced a weaker killing effect than the starch-capped CdO NPs. Therefore, we conclude that starch-capped ZnO NPs may be less toxic to microorganisms than starch-capped CdO NPs. In addition, starch-capped ZnO NPs are also suitable for antimicrobial applications.
Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J.; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M.; Lenarz, Thomas; Lim, Hubert H.
2015-01-01
Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763
Crinion, Jenny; Price, Cathy J
2005-12-01
Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second-level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and with their performance on surprise story recognition memory tests administered after scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with activation in the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex, where the response was dissociated from that in the left posterior temporal lobe.
Nawroth, Christian; von Borell, Eberhard
2015-05-01
Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion against losses compared to non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, in this study, we present a series of studies investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder-the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, either in a visual or auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets-lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, the performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.
Sekiguchi, Yusuke; Honda, Keita; Ishiguro, Akio
2016-01-01
Sensory impairments caused by neurological or physical disorders hamper kinesthesia, making rehabilitation difficult. In order to overcome this problem, we proposed and developed a novel biofeedback prosthesis called Auditory Foot for transforming sensory modalities, in which the sensor prosthesis transforms plantar sensations to auditory feedback signals. This study investigated the short-term effect of the auditory feedback prosthesis on walking in stroke patients with hemiparesis. To evaluate the effect, we compared four conditions of auditory feedback from plantar sensors at the heel and fifth metatarsal. We found significant differences in the maximum hip extension angle and ankle plantar flexor moment on the affected side during the stance phase, between conditions with and without auditory feedback signals. These results indicate that our sensory prosthesis could enhance walking performance in stroke patients with hemiparesis, resulting in effective short-term rehabilitation. PMID:27547456
Aydinli, Fatma Esen; Çak, Tuna; Kirazli, Meltem Çiğdem; Çinar, Betül Çiçek; Pektaş, Alev; Çengel, Ebru Kültür; Aksoy, Songül
Attention deficit hyperactivity disorder is a common impairing neuropsychiatric disorder with onset in early childhood. Almost half of the children with attention deficit hyperactivity disorder also experience a variety of motor-related dysfunctions ranging from fine/gross motor control problems to difficulties in maintaining balance. The main purpose of this study was to investigate the effects of two different auditory distractors, namely relaxing music and white noise, on upright balance performance in children with attention deficit hyperactivity disorder. We compared upright balance performance and the involvement of different sensory systems in the presence of auditory distractors between school-aged children with attention deficit hyperactivity disorder (n=26) and typically developing controls (n=20). The Neurocom SMART Balance Master dynamic posturography device was used for the sensory organization test, which was repeated three times for each participant in three different test environments. The balance scores in the silence environment were lower in the attention deficit hyperactivity disorder group, but the differences were not statistically significant. In addition to lower balance scores, the visual and vestibular ratios were also lower. Auditory distractors affected the general balance performance positively for both groups. The more challenging conditions, which used an unstable platform with distorted somatosensory signals, were affected the most. Relaxing music was more effective in the control group, white noise was more effective in the attention deficit hyperactivity disorder group, and the positive effects of white noise became more apparent in challenging conditions. To the best of our knowledge, this is the first study evaluating balance performance in children with attention deficit hyperactivity disorder under the effects of auditory distractors. Although more studies are needed, our results indicate that auditory distractors may have enhancing effects on upright balance performance in children with attention deficit hyperactivity disorder. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Lifespan differences in nonlinear dynamics during rest and auditory oddball performance.
Müller, Viktor; Lindenberger, Ulman
2012-07-01
Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an indicator of cortical reactivity. During rest, both nonlinear coupling and spectral alpha power decreased with age, whereas dimensional complexity increased. In contrast, when attending to the deviant stimulus, nonlinear coupling increased with age, and complexity decreased. Correlational analyses showed that nonlinear measures assessed during auditory oddball performance were reliably related to an independently assessed measure of perceptual speed. We conclude that cortical dynamics during rest and stimulus processing undergo substantial reorganization from childhood to old age, and propose that lifespan age differences in nonlinear dynamics during stimulus processing reflect lifespan changes in the functional organization of neuronal cell assemblies. © 2012 Blackwell Publishing Ltd.
Magnetic resonance imaging abnormalities in familial temporal lobe epilepsy with auditory auras.
Kobayashi, Eliane; Santos, Neide F; Torres, Fabio R; Secolin, Rodrigo; Sardinha, Luiz A C; Lopez-Cendes, Iscia; Cendes, Fernando
2003-11-01
Two forms of familial temporal lobe epilepsy (FTLE) have been described: mesial FTLE and FTLE with auditory auras. The gene responsible for mesial FTLE has not been mapped yet, whereas mutations in the LGI1 (leucine-rich, glioma-inactivated 1) gene, localized on chromosome 10q, have been found in FTLE with auditory auras. To describe magnetic resonance imaging (MRI) findings in patients with FTLE with auditory auras. We performed detailed clinical and molecular studies as well as MRI evaluation (including volumetry) in all available individuals from one family segregating FTLE with auditory auras. We evaluated 18 of 23 possibly affected individuals, and 13 patients reported auditory auras. In one patient, auditory auras were associated with déjà vu; in one patient, with ictal aphasia; and in 2 patients, with visual misperception. Most patients were not taking medication at the time, although all of them reported sporadic auras. Two-point lod scores were positive for 7 genotyped markers on chromosome 10q, and a Zmax of 6.35 was achieved with marker D10S185 at a recombination fraction of 0.0. Nucleotide sequence analysis of the LGI1 gene showed a point mutation, VIIIS7(-2)A-G, in all affected individuals. Magnetic resonance imaging was performed in 22 individuals (7 asymptomatic, 4 of them carriers of the affected haplotype on chromosome 10q and the VIIIS7[-2]A-G mutation). Lateral temporal lobe malformations were identified by visual analysis in 10 individuals, 2 of them with global enlargement demonstrated by volumetry. Mildly reduced hippocampi were observed in 4 individuals. In this family with FTLE with auditory auras, we found developmental abnormalities in the lateral cortex of the temporal lobes in 53% of the affected individuals. In contrast with mesial FTLE, none of the affected individuals had MRI evidence of hippocampal sclerosis.
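Linkage here is summarized by a two-point lod score, with Zmax = 6.35 at recombination fraction θ = 0. A minimal sketch of the phase-known lod computation, LOD(θ) = log10[(1−θ)^NR · θ^R / 0.5^(NR+R)]; the meiosis counts are illustrative (21 informative non-recombinant meioses happen to give a score near the reported Zmax), not the family's actual data:

```python
# Two-point lod score for phase-known meioses:
#   LOD(theta) = log10( (1-theta)^NR * theta^R / 0.5^(NR+R) )
# NR = non-recombinants, R = recombinants. Counts below are illustrative only.
import math

def lod(theta, non_recomb, recomb):
    """Phase-known two-point lod score (theta = 0 with recomb > 0 is undefined)."""
    likelihood_ratio = ((1 - theta) ** non_recomb * theta ** recomb) / 0.5 ** (non_recomb + recomb)
    return math.log10(likelihood_ratio)

# 21 fully informative non-recombinant meioses give roughly 6.3 at theta = 0,
# on the order of the Zmax reported in the abstract.
for theta in (0.0, 0.05, 0.1, 0.2):
    print(f"theta={theta:.2f}  LOD={lod(theta, non_recomb=21, recomb=0):.2f}")
```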
Speech comprehension aided by multiple modalities: behavioural and neural interactions
McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.
2014-01-01
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262
White matter microstructural properties correlate with sensorimotor synchronization abilities.
Blecher, Tal; Tal, Idan; Ben-Shachar, Michal
2016-09-01
Sensorimotor synchronization (SMS) to an external auditory rhythm is a developed ability in humans, particularly evident in dancing and singing. This ability is typically measured in the lab via a simple task of finger tapping to an auditory beat. While simplistic, there is some evidence that poor performance on this task could be related to impaired phonological and reading abilities in children. Auditory-motor synchronization is hypothesized to rely on a tight coupling between auditory and motor neural systems, but the specific pathways that mediate this coupling have not been identified yet. In this study, we test this hypothesis and examine the contribution of fronto-temporal and callosal connections to specific measures of rhythmic synchronization. Twenty participants went through SMS and diffusion magnetic resonance imaging (dMRI) measurements. We quantified the mean asynchrony between an auditory beat and participants' finger taps, as well as the time to resynchronize (TTR) with an altered meter, and examined the correlations between these behavioral measures and diffusivity in a small set of predefined pathways. We found significant correlations between asynchrony and fractional anisotropy (FA) in the left (but not right) arcuate fasciculus and in the temporal segment of the corpus callosum. On the other hand, TTR correlated with FA in the precentral segment of the callosum. To our knowledge, this is the first demonstration that relates these particular white matter tracts with performance on an auditory-motor rhythmic synchronization task. We propose that left fronto-temporal and temporal-callosal fibers are involved in prediction and constant comparison between auditory inputs and motor commands, while inter-hemispheric connections between the motor/premotor cortices contribute to successful resynchronization of motor responses with a new external rhythm, perhaps via inhibition of tapping to the previous rhythm. Our results indicate that auditory-motor synchronization skills are associated with anatomical pathways that have been previously related to phonological awareness, thus offering a possible anatomical basis for the behavioral covariance between these abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.
2016-01-01
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975
Predictive cues for auditory stream formation in humans and monkeys.
Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael
2017-12-18
Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences, as well as the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Stevenson, Ryan A; Schlesinger, Joseph J; Wallace, Mark T
2013-02-01
Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded reflecting anesthesiologists' abilities to detect changes in oxygen saturation across three levels of visual attention in quiet and with noise. Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation conveyed by auditory cues signaling 99 and 98% saturation. These effects are compounded by auditory noise, producing up to a 17% decline in performance. These deficits are seen in the ability to accurately detect a change in oxygen saturation and in speed of response. Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors. Reducing such errors is thus a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists have to attend to many tasks and are thus susceptible to error.
ERIC Educational Resources Information Center
Ortega-Maldonado, Alberto; Salanova, Marisa
2018-01-01
This study explores the predictive relationships between psychological capital (PsyCap), meaning-focused coping, satisfaction and performance among undergraduate students. Six hundred and eighty two (n = 682) college students from 29 different academic programmes completed an academic well-being survey, which included measures of PsyCap, coping…
Farmer, Steven A; Moghtaderi, Ali; Schilsky, Samantha; Magid, David; Sage, William; Allen, Nori; Masoudi, Frederick A; Dor, Avi; Black, Bernard
2018-06-06
Physicians often report practicing defensive medicine to reduce malpractice risk, including performing expensive but marginally beneficial tests and procedures. Although there is little evidence that malpractice reform affects overall health care spending, it may influence physician behavior for specific conditions involving clinical uncertainty. To examine whether reducing malpractice risk is associated with clinical decisions involving coronary artery disease testing and treatment. Difference-in-differences design, comparing physician-specific changes in coronary artery disease testing and treatment in 9 new-cap states that adopted damage caps between 2003 and 2005 with 20 states without caps. We used the 5% national Medicare fee-for-service random sample between 1999 and 2013. Physicians (n = 75 801; 36 647 in new-cap states) who ordered or performed 2 or more coronary angiographies. Data were analyzed from June 2015 to January 2018. Changes in ischemic evaluation rates for possible coronary artery disease, type of initial evaluation (stress testing or coronary angiography), progression from stress test to angiography, and progression from ischemic evaluation to revascularization (percutaneous coronary intervention or coronary artery bypass grafting). We studied 36 647 physicians in new-cap states and 39 154 physicians in no-cap states. New-cap states had younger populations, more minorities, lower per-capita incomes, fewer physicians per capita, and lower managed care penetration. Following cap adoption, new-cap physicians reduced invasive testing (angiography) as a first diagnostic test compared with control physicians (relative change, -24%; 95% CI, -40% to -7%; P = .005) with an offsetting increase in noninvasive stress testing (7.8%; 95% CI, -3.6% to 19.3%; P = .17), and referred fewer patients for angiography following stress testing (-21%; 95% CI, -40% to -2%; P = .03). New-cap physicians also reduced revascularization rates after ischemic evaluation (-23%; 95% CI, -40% to -4%; P = .02; driven by fewer percutaneous coronary interventions). Changes in overall ischemic evaluation rates were similar for new-cap and control physicians (-0.05%; 95% CI, -8.0% to 7.9%; P = .98). Physicians substantially altered their approach to coronary artery disease testing and follow-up after initial ischemic evaluations following adoption of damage caps. They performed a similar number of ischemic evaluations but conducted fewer initial left heart catheterizations, referred fewer stress-tested patients for left heart catheterizations, and referred fewer patients for revascularization. These findings suggest that physicians tolerate greater clinical uncertainty in coronary artery disease testing and treatment if they face lower malpractice risk.
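The study uses a difference-in-differences design, comparing changes among physicians in new-cap states with changes among physicians in no-cap states. A minimal sketch of the canonical two-way specification, outcome ~ treated × post, on simulated data (the variable names, outcome and effect size are assumptions, not the study's estimates):

```python
# Canonical difference-in-differences: the coefficient on treated:post estimates
# the change attributable to damage-cap adoption. Data are simulated; the variable
# names and effect size are assumptions, not the study's results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # physician practices in a new-cap state
    "post": rng.integers(0, 2, n),     # observation falls after cap adoption
})
# Simulated rate of angiography as a first test, with a negative post-cap effect
# confined to physicians in new-cap states.
df["angio_first"] = (0.30 + 0.02 * df["treated"] - 0.01 * df["post"]
                     - 0.07 * df["treated"] * df["post"] + rng.normal(0, 0.05, n))

did = smf.ols("angio_first ~ treated * post", data=df).fit()
print(round(did.params["treated:post"], 3))  # difference-in-differences estimate
```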
The ability for cocaine and cocaine-associated cues to compete for attention
Pitchers, Kyle K.; Wood, Taylor R.; Skrzynski, Cari J.; Robinson, Terry E.; Sarter, Martin
2017-01-01
In humans, reward cues, including drug cues in addicts, are especially effective in biasing attention towards them, so much so they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in the propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an Intermittent Access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed. PMID:27890441
Psychological Predictors of Visual and Auditory P300 Brain-Computer Interface Performance
Hammer, Eva M.; Halder, Sebastian; Kleih, Sonja C.; Kübler, Andrea
2018-01-01
Brain-Computer Interfaces (BCIs) provide communication channels independent from muscular control. In the current study we used two versions of the P300-BCI: one based on visual and the other on auditory stimulation. Up to now, data on the impact of psychological variables on P300-BCI control are scarce. Hence, our goal was to identify new predictors with a comprehensive psychological test-battery. A total of N = 40 healthy BCI novices took part in a visual and an auditory BCI session. Psychological variables were measured with an electronic test-battery including clinical, personality, and performance tests. The personality factor “emotional stability” was negatively correlated (Spearman's rho = −0.416; p < 0.01) and an output variable of the non-verbal learning test (NVLT), which can be interpreted as ability to learn, correlated positively (Spearman's rho = 0.412; p < 0.01) with visual P300-BCI performance. In a linear regression analysis both independent variables explained 24% of the variance. “Emotional stability” was also negatively related to auditory P300-BCI performance (Spearman's rho = −0.377; p < 0.05), but failed to reach significance in the regression analysis. Psychological parameters seem to play a moderate role in visual P300-BCI performance. “Emotional stability” was identified as a new predictor, indicating that BCI users who characterize themselves as calm and rational showed worse BCI performance. The positive relation of the ability to learn and BCI performance corroborates the notion that also for P300-based BCIs learning may constitute an important factor. Further studies are needed to consolidate or reject the presented predictors. PMID:29867319
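The predictor analysis combines Spearman rank correlations with a linear regression in which two predictors jointly explain 24% of the variance in visual P300-BCI performance. A minimal sketch of that two-step analysis on simulated scores (variable names and values are illustrative only):

```python
# Spearman correlations followed by a two-predictor linear regression,
# mirroring the analysis described in the abstract. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 40
df = pd.DataFrame({
    "emotional_stability": rng.normal(0, 1, n),
    "nvlt_learning": rng.normal(0, 1, n),
})
df["bci_accuracy"] = (70 - 4 * df["emotional_stability"]
                      + 5 * df["nvlt_learning"] + rng.normal(0, 8, n))

for predictor in ["emotional_stability", "nvlt_learning"]:
    rho, p = spearmanr(df[predictor], df["bci_accuracy"])
    print(f"{predictor}: rho={rho:.2f}, p={p:.3f}")

fit = smf.ols("bci_accuracy ~ emotional_stability + nvlt_learning", data=df).fit()
print(f"R^2 = {fit.rsquared:.2f}")  # share of variance explained by both predictors
```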
Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.
Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R
2003-05-01
A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16 or 32 channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with similar characteristics as normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means for a graded information transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
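The intensity generalization gradients are summarized by the 50% point, the current level at which detection probability reaches 0.5. A minimal sketch of estimating that point by fitting a logistic psychometric function to detection proportions; the probe-trial data below are invented, and the abstract does not state which fitting procedure the authors used:

```python
# Fit a logistic psychometric function to detection probability vs. current,
# then read off the 50% point. The probe data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(current_uA, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (current_uA - midpoint)))

currents = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)   # microamps
p_detect = np.array([0.02, 0.10, 0.35, 0.62, 0.85, 0.95, 0.98])

(midpoint, slope), _ = curve_fit(logistic, currents, p_detect, p0=[45.0, 0.1])
print(f"50% point ~ {midpoint:.1f} microA (slope {slope:.3f})")
```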
McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia
2015-01-01
Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.
The Effect of a Homework Grade Cap in an Introductory Finance Class
ERIC Educational Resources Information Center
Cannonier, Colin; Chen, Dennis; Smolira, Joe
2016-01-01
The authors used data collected from various sections of principles of finance classes at a private university to examine the effect of utilizing a homework grade cap policy. The results indicate that the homework grade cap policy increased the homework scores and that an increase in homework scores improved performance of the students on exams.…
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem involves obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme, developed by linearizing certain auditory model stages, ensures that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
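The frequency-pruning idea above amounts to discarding spectral components that cannot contribute to the percept before running the expensive later model stages. The following toy sketch illustrates that idea only in spirit; the threshold rule, signal, and numbers are assumptions and do not reproduce the dissertation's algorithm.

```python
# Toy illustration of frequency pruning: discard spectral bins whose level
# falls below an assumed threshold before passing the signal to the (costly)
# later stages of an auditory model. Signal and threshold are illustrative.
import numpy as np

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
signal = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
level_db = 20 * np.log10(spectrum + 1e-12)

threshold_db = level_db.max() - 60          # crude pruning threshold (assumption)
keep = level_db > threshold_db              # bins that survive pruning

print(f"Bins kept: {keep.sum()} of {keep.size} "
      f"({100 * keep.sum() / keep.size:.1f}%)")
# Only the surviving bins would be handed to the remaining model stages.
```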
Using Music as a Background for Reading: An Exploratory Study.
ERIC Educational Resources Information Center
Mulliken, Colleen N.; Henk, William A.
1985-01-01
Reports on a study during which intermediate level students were exposed to three auditory backgrounds while reading (no music, classical music, and rock music), and their subsequent comprehension performance was measured. Concludes that the auditory background during reading may affect comprehension and that, for most students, rock music should…
ERIC Educational Resources Information Center
McKeown, Denis; Wellsted, David
2009-01-01
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…
Information Processing in Auditory-Visual Conflict.
ERIC Educational Resources Information Center
Henker, Barbara A.; Whalen, Carol K.
1972-01-01
The present study used a set of bimodal (auditory-visual) conflict tasks designed specifically for the preschool child. The basic component was a match-to-sample sequence designed to reduce the often-found contaminating factors in studies with young children: failure to understand or remember instructions, inability to perform the indicator response, or…
DOT National Transportation Integrated Search
1988-08-01
This paper deals with the use of response/recovery rate to auditory startle as a laboratory technique for simulating some of the principal aspects of the initial shock phase of sudden emergency situations. It is submitted that auditory startle, with ...
Developmental Trends in Recall of Central and Incidental Auditory
ERIC Educational Resources Information Center
Hallahan, Daniel P.; And Others
1974-01-01
An auditory recall task, involving central and incidental stimuli designed to correspond to processes used in selective attention, was presented to elementary school students. Older children and girls performed better than younger children and boys, especially when animals were the relevant and food the irrelevant stimuli. (DP)
Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis
ERIC Educational Resources Information Center
Kratzig, Gregory P.; Arbuthnott, Katherine D.
2006-01-01
Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…
Detection and localization of sounds: Virtual tones and virtual reality
NASA Astrophysics Data System (ADS)
Zhang, Peter Xinya
Modern physiologically based binaural models employ internal delay lines in the pathways from left and right peripheries to central processing nuclei. Various models apply the delay lines differently, and give different predictions for the detection of dichotic pitches, wherein listeners hear a virtual tone in the noise background. Two dichotic pitch stimuli (Huggins pitch and binaural coherence edge pitch) with low boundary frequencies were used to test the predictions by two different models. The results from five experiments show that the relative dichotic pitch strengths support the equalization-cancellation model and disfavor the central activity pattern (CAP) model. The CAP model makes predictions for the lateralization of Huggins pitch based on interaural time differences (ITD). By measuring human lateralization for Huggins pitches with two different types of phase boundaries (linear-phase and stepped-phase), and by comparing with lateralization of sine-tones, it was shown that the lateralization of Huggins pitch stimuli is similar to that of the corresponding sine-tones, and the lateralizations of Huggins pitch stimuli with the two different boundaries were even more similar to one another. The results agreed roughly with the CAP model predictions. Agreement was significantly improved by incorporating individualized scale factors and offsets into the model, and was further improved with a model including compression at large ITDs. Furthermore, ambiguous stimuli, with an interaural phase difference of 180 degrees, were consistently lateralized on the left or right based on individual asymmetries, which introduces the concept of "earedness". Interaural phase difference (IPD) and interaural time difference (ITD) are two different forms of temporal cues. With varying frequency, an auditory system based on IPD or ITD gives different quantitative predictions on lateralization. A lateralization experiment with sine tones tested whether the human auditory system is an IPD-meter or an ITD-meter. Listeners estimated the lateral positions of 50 sine tones with IPDs ranging from -150° to +150° and with different frequencies, all in the range where signal fine structure supports lateralization. The estimates indicated that listeners lateralize sine tones on the basis of ITD and not IPD. In order to distinguish between sound sources in front and in back, listeners use spectral cues caused by diffraction by the pinna, head, neck, and torso. To study this effect, the VRX technique was developed based on transaural technology. The technique was successful in presenting the desired spectra to listeners' ears with high accuracy up to 16 kHz. When presented with a real source and a simulated virtual signal, listeners in an anechoic room could not distinguish between them. Eleven experiments on discrimination between front and back sources were carried out in an anechoic room. The results show several findings. First, the results support a multiple band comparison model, and disfavor a necessary band(s) model. Second, it was found that preserving the spectral dips was more important than preserving the spectral peaks for successful front/back discrimination. Moreover, it was confirmed that neither monaural cues nor interaural spectral level difference cues were adequate for front/back discrimination. Furthermore, listeners' performance did not deteriorate when presented with sharpened spectra.
Finally, when presented with an interaural delay of less than 200 µs, listeners could still discriminate front from back, although the image was pulled to the side, which suggests that localization in the azimuthal plane and in the sagittal plane is independent within certain limits.
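The IPD-meter versus ITD-meter question above turns on a simple relationship: for a fixed phase difference, the corresponding time difference shrinks as frequency rises (ITD = IPD / (2πf)). The short sketch below, with illustrative frequencies and a fixed IPD chosen as assumptions, shows how the two accounts diverge.

```python
# Sketch of why IPD- and ITD-based accounts of lateralization diverge with
# frequency: a fixed interaural phase difference corresponds to an interaural
# time difference that shrinks as frequency grows (ITD = IPD / (2*pi*f)).
# Frequencies and the fixed IPD are illustrative.
import numpy as np

ipd_deg = 90.0                              # fixed interaural phase difference
freqs_hz = np.array([250, 500, 750, 1000])  # fine-structure range

ipd_rad = np.deg2rad(ipd_deg)
itd_us = ipd_rad / (2 * np.pi * freqs_hz) * 1e6

for f, itd in zip(freqs_hz, itd_us):
    print(f"{f:4.0f} Hz: IPD = {ipd_deg:.0f} deg -> ITD = {itd:6.1f} us")
# An IPD-meter would predict the same lateral position at every frequency;
# an ITD-meter would predict positions moving toward the midline as frequency
# increases. The study's estimates favored the ITD account.
```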
The effect of spatial auditory landmarks on ambulation.
Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E
2018-02-01
The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 centimeters from the ear, at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment was performed in which the effect of moving the speaker azimuthal position to 45, 90, 135, and 180° was tested. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared with 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.
Neurocognitive screening of lead-exposed andean adolescents and young adults.
Counter, S Allen; Buchanan, Leo H; Ortega, Fernando
2009-01-01
This study was designed to assess the utility of two psychometric tests with putative minimal cultural bias for use in field screening of lead (Pb)-exposed Ecuadorian Andean workers. Specifically, the study evaluated the effectiveness in Pb-exposed adolescents and young adults of a nonverbal reasoning test standardized for younger children, and compared the findings with performance on a test of auditory memory. The Raven Coloured Progressive Matrices (RCPM) was used as a test of nonverbal intelligence, and the Digit Span subtest of the Wechsler IV intelligence scale was used to assess auditory memory/attention. The participants were 35 chronically Pb-exposed Pb-glazing workers, aged 12-21 yr. Blood lead (PbB) levels for the study group ranged from 3 to 86 µg/dl, with 65.7% of the group at or above 10 µg/dl. Zinc protoporphyrin heme ratios (ZPP/heme) ranged from 38 to 380 µmol/mol, with 57.1% of the participants showing abnormal ZPP/heme (>69 µmol/mol). ZPP/heme was significantly correlated with PbB levels, suggesting chronic Pb exposure. Performance on the RCPM was less than average on the U.S., British, and Puerto Rican norms, but average on the Peruvian norms. Significant inverse associations between PbB/ZPP concentrations and RCPM standard scores using the U.S., Puerto Rican, and Peruvian norms were observed, indicating decreasing RCPM test performance with increasing PbB and ZPP levels. RCPM scores were significantly correlated with performance on the Digit Span test for auditory memory. The mean Digit Span scale score was less than average, suggesting auditory memory/attention deficits. In conclusion, both the RCPM and Digit Span tests were found to be effective instruments for field screening of visual-spatial reasoning and auditory memory abilities, respectively, in Pb-exposed Andean adolescents and young adults.
Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz
2015-01-01
Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate the classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. Independent component analysis (ICA) and Kalman filtering were assessed for their ability to enhance classification performance by extracting brain activity related to the auditory event from other non-task-related brain activity and artifacts. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance could be achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve the good single-trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.
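A minimal sketch of the kind of single-trial "chirp vs. silence" classification evaluated above is given below. It uses placeholder random epochs and a simple linear classifier as assumptions; the study's actual pipeline (ICA, Kalman filtering, and its specific features) is not reproduced here.

```python
# Minimal sketch of single-trial "chirp vs. silence" classification on EEG
# epochs. The data are random placeholders, so accuracy should hover near
# chance; a real pipeline would use cleaned, feature-engineered epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 400, 16, 128

epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)        # 1 = chirp present, 0 = silence

# Simple feature vector: flatten each epoch (real pipelines would use
# artifact-cleaned, band-limited, or ICA-derived features instead).
features = epochs.reshape(n_trials, -1)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f} "
      f"(chance is about 0.50 for balanced classes)")
```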
NASA Astrophysics Data System (ADS)
Wang, Lei; Wright, C. David; Aziz, Mustafa M.; Yang, Ci Hui; Yang, Guo Wei
2014-11-01
The capping layer and the probe tip, which serve as the protective layer and the recording tool for phase-change probe memory, respectively, play an important role in its writing performance and have therefore received considerable attention. Their influence on the readout performance of phase-change probe memory, however, has rarely been reported. A three-dimensional parametric study based on the Laplace equation was therefore conducted to investigate the effect of the capping layer and the probe tip on the resulting reading contrast for the two cases of reading a crystalline bit from an amorphous matrix and reading an amorphous bit from a crystalline matrix. The results indicated that a capping layer with a thickness of 2 nm and an electrical conductivity of 50 Ω⁻¹ m⁻¹ is able to provide an appropriate reading contrast for both cases, while satisfying the previous writing requirement, particularly with the assistance of a platinum silicide probe tip.
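The reading contrast discussed above depends on how the capping layer's thickness and conductivity add resistance between the tip and the bit. The sketch below is only a back-of-envelope one-dimensional series-resistance estimate, not the paper's three-dimensional Laplace-equation model; all geometry and material values are illustrative assumptions.

```python
# Back-of-envelope estimate (1-D series resistances, not the paper's 3-D
# Laplace model) of how a capping layer affects read contrast in a
# phase-change probe memory. All values below are illustrative assumptions.

def read_current(bit_conductivity, cap_thickness_m, cap_conductivity,
                 bit_thickness_m=10e-9, contact_area_m2=(10e-9) ** 2,
                 tip_resistance_ohm=1e4, read_voltage=1.0):
    """Current through tip -> capping layer -> bit, treated as resistors in series."""
    r_cap = cap_thickness_m / (cap_conductivity * contact_area_m2)
    r_bit = bit_thickness_m / (bit_conductivity * contact_area_m2)
    return read_voltage / (tip_resistance_ohm + r_cap + r_bit)

sigma_crystalline = 1e5   # S/m, illustrative
sigma_amorphous   = 1e0   # S/m, illustrative

i_cryst = read_current(sigma_crystalline, 2e-9, 50.0)
i_amorp = read_current(sigma_amorphous, 2e-9, 50.0)
print(f"Read contrast (crystalline/amorphous current): {i_cryst / i_amorp:.1f}")
```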
Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists.
Ruiz, María Herrojo; Jabusch, Hans-Christian; Altenmüller, Eckart
2009-11-01
Music performance is an extremely rapid process with a low incidence of errors, even at the fast rates of production required. This is possible only due to the fast functioning of the self-monitoring system. Surprisingly, no specific data about error monitoring have been published in the music domain. Consequently, the present study investigated the electrophysiological correlates of executive control mechanisms, in particular error detection, during piano performance. Our aim was to extend previous research on the human action-monitoring system by selecting a highly skilled multimodal task. Pianists had to retrieve memorized music pieces at a fast tempo in the presence or absence of auditory feedback. Our main interest was to study the interplay between auditory and sensorimotor information in the processes triggered by an erroneous action, considering only wrong pitches as errors. We found that around 70 ms prior to errors a negative component is elicited in the event-related potentials and is generated by the anterior cingulate cortex. Interestingly, this component was independent of the auditory feedback. However, the auditory information did modulate the processing of the errors after their execution, as reflected in a larger error positivity (Pe). Our data are interpreted within the context of feedforward models and auditory-motor coupling.
A physiologically based model for temporal envelope encoding in human primary auditory cortex.
Dugué, Pierre; Le Bouquin-Jeannès, Régine; Edeline, Jean-Marc; Faucon, Gérard
2010-09-01
Communication sounds exhibit temporal envelope fluctuations in the low-frequency range (<70 Hz), and human speech has prominent 2-16 Hz modulations with a maximum at 3-4 Hz. Here, we propose a new phenomenological model of the human auditory pathway (from cochlea to primary auditory cortex) to simulate responses to amplitude-modulated white noise. To validate the model, performance was estimated by quantifying temporal modulation transfer functions (TMTFs). Previous models considered either the lower stages of the auditory system (up to the inferior colliculus) or only the thalamocortical loop. The present model, divided into two stages, is based on anatomical and physiological findings and includes the entire auditory pathway. The first stage, from the outer ear to the colliculus, incorporates inhibitory interneurons in the cochlear nucleus to increase performance at high stimulus levels. The second stage takes into account the anatomical connections of the thalamocortical system and includes the fast and slow excitatory and inhibitory currents. After optimizing the parameters of the model to reproduce the diversity of TMTFs obtained from human subjects, a patient-specific model was derived and the parameters were optimized to effectively reproduce both spontaneous activity and the oscillatory part of the evoked response. Copyright © 2010 Elsevier B.V. All rights reserved.
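A TMTF point is obtained by driving the model with amplitude-modulated noise and measuring how strongly the response follows the modulation frequency. The sketch below shows one way that probe stimulus can be generated and a response's modulation quantified; the "response" here is just a rectified, smoothed copy of the stimulus used as a stand-in, not the paper's cortical model, and all parameters are assumptions.

```python
# Sketch: generate amplitude-modulated white noise of the kind used to probe
# temporal modulation transfer functions (TMTFs), and quantify how strongly a
# response follows the modulation frequency. The "response" is a toy stand-in.
import numpy as np

fs = 16000
dur = 1.0
t = np.arange(0, dur, 1 / fs)
f_mod, depth = 4.0, 1.0                       # 4 Hz, fully modulated

rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)
stimulus = (1 + depth * np.sin(2 * np.pi * f_mod * t)) * noise

# Toy "neural" response: rectified stimulus smoothed by a moving average.
win = int(0.02 * fs)
response = np.convolve(np.abs(stimulus), np.ones(win) / win, mode="same")

# Relative modulation of the response at f_mod (one TMTF point).
spectrum = np.abs(np.fft.rfft(response - response.mean()))
freqs = np.fft.rfftfreq(response.size, 1 / fs)
gain = spectrum[np.argmin(np.abs(freqs - f_mod))] / response.mean() / response.size * 2
print(f"Relative modulation of the response at {f_mod:.0f} Hz: {gain:.2f}")
```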
Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.
Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta
2009-01-01
In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.
Evidence for multisensory spatial-to-motor transformations in aiming movements of children.
King, Bradley R; Kagerer, Florian A; Contreras-Vidal, Jose L; Clark, Jane E
2009-01-01
The extant developmental literature investigating age-related differences in the execution of aiming movements has predominantly focused on visuomotor coordination, despite the fact that additional sensory modalities, such as audition and somatosensation, may contribute to motor planning, execution, and learning. The current study investigated the execution of aiming movements toward both visual and acoustic stimuli. In addition, we examined the interaction between visuomotor and auditory-motor coordination as 5- to 10-yr-old participants executed aiming movements to visual and acoustic stimuli before and after exposure to a visuomotor rotation. Children in all age groups demonstrated significant improvement in performance under the visuomotor perturbation, as indicated by decreased initial directional and root mean squared errors. Moreover, children in all age groups demonstrated significant visual aftereffects during the postexposure phase, suggesting a successful update of their spatial-to-motor transformations. Interestingly, these updated spatial-to-motor transformations also influenced auditory-motor performance, as indicated by distorted movement trajectories during the auditory postexposure phase. The distorted trajectories were present during auditory postexposure even though the auditory-motor relationship was not manipulated. Results suggest that by the age of 5 yr, children have developed a multisensory spatial-to-motor transformation for the execution of aiming movements toward both visual and acoustic targets.
Making and monitoring errors based on altered auditory feedback
Pfordresher, Peter Q.; Beasley, Robertson T. E.
2014-01-01
Previous research has demonstrated that altered auditory feedback (AAF) disrupts music performance and causes disruptions in both action planning and the perception of feedback events. It has been proposed that this disruption occurs because of interference within a shared representation for perception and action (Pfordresher, 2006). Studies reported here address this claim from the standpoint of error monitoring. In Experiment 1 participants performed short melodies on a keyboard while hearing no auditory feedback, normal auditory feedback, or alterations to feedback pitch on some subset of events. Participants overestimated error frequency when AAF was present but not for normal feedback. Experiment 2 introduced a concurrent load task to determine whether error monitoring requires executive resources. Although the concurrent task enhanced the effect of AAF, it did not alter participants’ tendency to overestimate errors when AAF was present. A third correlational study addressed whether effects of AAF are reduced for a subset of the population who may lack the kind of perception/action associations that lead to AAF disruption: poor-pitch singers. Effects of manipulations similar to those presented in Experiments 1 and 2 were reduced for these individuals. We propose that these results are consistent with the notion that AAF interference is based on associations between perception and action within a forward internal model of auditory-motor relationships. PMID:25191294
New Perspectives on Assessing Amplification Effects
Souza, Pamela E.; Tremblay, Kelly L.
2006-01-01
Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734
Hanson, Mark D; Szatmari, Peter; Eva, Kevin W
2011-01-01
The authors evaluated the differential impact of clerk interest and participation in a Child and Adolescent Psychiatry (CAP) clerkship rotation upon psychiatry and pediatrics residency matches. Authors studied clerks from the McMaster University M.D. program graduating years of 2005-2007. Participants were categorized as 1) clerks with CAP clerkship interest and CAP clerkship participation; 2) clerks with CAP clerkship interest but without CAP clerkship participation; and 3) clerks with neither CAP clerkship interest nor CAP clerkship participation. The outcome variable was residency matches, with Psychiatry and Pediatrics residency matches highlighted. Descriptive statistics were used, and chi-squared tests performed to compare proportions of residency matches across these three clerkship groups. Residency matches of 390 clerks were reviewed. CAP clerkship interest was expressed by 23.9% of clerks. Comparison across the two CAP clerkship interest groups revealed match rates to Psychiatry and Pediatrics not to be significantly different, although the proportion of each match was significantly different from the third clerkship group (without CAP clerkship interest) in both instances. CAP clerkship interest, but not participation, was associated with Psychiatry and Pediatrics residency matches. CAP clerkship interest among clerks presents recruitment and educational opportunities; a recruitment opportunity for clerks heading toward a Psychiatry residency, and an educational opportunity for clerks heading toward a Pediatrics residency.
Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.
Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin
2018-01-01
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds but also above-threshold temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image was obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting state EEG from all participants as well as data on the intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors for auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks, and in SiN, despite having normal peripheral hearing as signaled by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA was not different between the two age groups. Age modulated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question of how age-related cortical thinning and intrinsic EEG architecture relate to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency band power relate to the extent of age-related central hearing loss in older adults. The results are discussed within the current frameworks of speech processing and aging.
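Testing age as a moderator, as described above, usually comes down to including an interaction term between the anatomical predictor and age group. The sketch below illustrates that analysis pattern with hypothetical column names and generated data; it is not the study's dataset or exact model specification.

```python
# Sketch of a moderated regression: does age group change the relationship
# between cortical thickness (CT) in an auditory ROI and auditory performance?
# Column names and the generated data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 36
df = pd.DataFrame({
    "ct_right_auditory": rng.normal(2.5, 0.2, n),        # mm, illustrative
    "age_group": rng.choice(["young", "older"], n),
})
df["auditory_score"] = (
    0.5 * df["ct_right_auditory"]
    + 0.8 * (df["age_group"] == "older") * df["ct_right_auditory"]
    + rng.normal(0, 0.3, n)
)

# The CT x age-group interaction term is the moderation test.
model = smf.ols("auditory_score ~ ct_right_auditory * age_group", data=df).fit()
print(model.summary().tables[1])
```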
Partially Overlapping Brain Networks for Singing and Cello Playing
Segado, Melanie; Hollinger, Avrum; Thibodeau, Joseph; Penhune, Virginia; Zatorre, Robert J.
2018-01-01
This research uses an MR-compatible cello to compare functional brain activation during singing and cello playing within the same individuals to determine the extent to which arbitrary auditory-motor associations, like those required to play the cello, co-opt functional brain networks that evolved for singing. Musical instrument playing and singing both require highly specific associations between sounds and movements. Because these are both used to produce musical sounds, it is often assumed in the literature that their neural underpinnings are highly similar. However, singing is an evolutionarily old human trait, and the auditory-motor associations used for singing are also used for speech and non-speech vocalizations. This sets it apart from the arbitrary auditory-motor associations required to play musical instruments. The pitch range of the cello is similar to that of the human voice, but cello playing is completely independent of the vocal apparatus, and can therefore be used to dissociate the auditory-vocal network from the auditory-motor network. While in the MR scanner, 11 expert cellists listened to and subsequently produced individual tones either by singing or cello playing. All participants were able to sing and play the target tones in tune (<50 cents deviation from target). We found that brain activity during cello playing directly overlaps with brain activity during singing in many areas within the auditory-vocal network. These include primary motor, dorsal pre-motor, and supplementary motor cortices (M1, dPMC, SMA), the primary and periprimary auditory cortices within the superior temporal gyrus (STG) including Heschl's gyrus, anterior insula (aINS), anterior cingulate cortex (ACC), and intraparietal sulcus (IPS), and the cerebellum but, notably, exclude the periaqueductal gray (PAG) and basal ganglia (putamen). Second, we found that activity within the overlapping areas is positively correlated with, and therefore likely contributing to, both singing and playing in tune, as determined with performance measures. Third, we found that activity in auditory areas is functionally connected with activity in dorsal motor and pre-motor areas, and that the connectivity between them is positively correlated with good performance on this task. This functional connectivity suggests that the brain areas are working together to contribute to task performance and are not just coincidentally active. Last, our findings showed that cello playing may directly co-opt vocal areas (including the larynx area of the motor cortex), especially if musical training begins before age 7. PMID:29892211
2012-01-01
Background Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to impose a high conscious burden on the users. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods Ten male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training session. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. Conclusions The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, possibly due to a visual "pop-out" or enhancement effect. Also, the NASA TLX, the EEG alpha and beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications. PMID:22682425
Controlling contamination in Mo/Si multilayer mirrors by Si surface capping modifications
NASA Astrophysics Data System (ADS)
Malinowski, Michael E.; Steinhaus, Chip; Clift, W. Miles; Klebanoff, Leonard E.; Mrowka, Stanley; Soufli, Regina
2002-07-01
The performance of Mo/Si multilayer mirrors (MLMs) used to reflect extreme ultraviolet (EUV) radiation in an EUV + hydrocarbon (HC) vapor environment can be improved by optimizing the silicon capping layer thickness on the MLM in order to minimize the initial buildup of carbon on MLMs. Carbon buildup is undesirable since it can absorb EUV radiation and reduce MLM reflectivity. A set of Mo/Si MLMs deposited on Si wafers was fabricated such that each MLM had a different Si capping layer thickness ranging from 2 nm to 7 nm. Samples from each MLM wafer were exposed to a combination of EUV light + HC vapors at the Advanced Light Source (ALS) synchrotron in order to determine whether the Si capping layer thickness affected the carbon buildup on the MLMs. It was found that the capping layer thickness had a major influence on this 'carbonizing' tendency, with the 3 nm layer thickness providing the best initial resistance to carbonizing and accompanying EUV reflectivity loss in the MLM. The Si capping layer thickness deposited on a typical EUV optic is 4.3 nm. Measurements of the absolute reflectivities performed on the Calibration and Standards beamline at the ALS indicated the EUV reflectivity of the 3 nm-capped MLM was actually slightly higher than that of the normal, 4 nm Si-capped sample. These results show that the use of a 3 nm capping layer represents an improvement over the 4 nm layer, since the 3 nm layer has both a higher absolute reflectivity and better initial resistance to carbon buildup. The results also support the general concept of minimizing the electric field intensity at the MLM surface to minimize photoelectron production and, correspondingly, carbon buildup in an EUV + HC vapor environment.
MEGALEX: A megastudy of visual and auditory word recognition.
Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan
2018-06-01
Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website (https://sedufau.shinyapps.io/megalex/) and are searchable at www.lexique.org, inside the Open Lexique search engine.
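The item-level regressions mentioned above predict each item's mean decision latency from lexical variables and compare the variance each predictor accounts for. The sketch below illustrates that analysis pattern with hypothetical column names and generated data; it does not use the MEGALEX files or the authors' exact model.

```python
# Sketch of an item-level regression of decision latencies on lexical
# predictors, in the spirit of the analyses described above. The data frame
# and column names are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_items = 500
items = pd.DataFrame({
    "log_frequency": rng.normal(2.0, 1.0, n_items),
    "length": rng.integers(3, 12, n_items),
    "neighborhood": rng.poisson(3, n_items),
})
items["rt_ms"] = (
    700 - 40 * items["log_frequency"] + 8 * items["length"]
    - 2 * items["neighborhood"] + rng.normal(0, 50, n_items)
)

fit = smf.ols("rt_ms ~ log_frequency + length + neighborhood", data=items).fit()
print(fit.rsquared)          # variance accounted for by these predictors
print(fit.params)            # direction and size of each effect
```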
Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan
2016-12-01
Auditory hallucinations constitute an important symptom component in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task wherein they were instructed to differentiate between a 5-s burst of plain white noise and voiced-noise. The analysis showed that false alarms (p=0.02), discriminability index (p=0.001) and decision bias (p=0.004) were significantly different between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings extend further support for impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
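The discriminability index and decision bias reported above are the standard equal-variance Gaussian signal detection measures derived from hit and false-alarm rates. The sketch below computes them with the usual formulas; the trial counts are illustrative only, and the 0/1-rate correction is one common convention rather than necessarily the one used in the study.

```python
# Standard equal-variance Gaussian signal detection computations:
# discriminability (d') and decision bias (criterion c) from hit and
# false-alarm counts. The counts below are illustrative only.
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) with a simple correction for 0/1 rates."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)          # log-linear correction
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)                 # negative = liberal bias
    return d_prime, criterion

d, c = sdt_indices(hits=30, misses=10, false_alarms=12, correct_rejections=28)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```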
A psychophysiological evaluation of the perceived urgency of auditory warning signals
NASA Technical Reports Server (NTRS)
Burt, J. L.; Bartolome, D. S.; Burdette, D. W.; Comstock, J. R. Jr
1995-01-01
One significant concern that pilots have about cockpit auditory warnings is that the signals presently used lack a sense of priority. The relationship between auditory warning sound parameters and perceived urgency is, therefore, an important topic of enquiry in aviation psychology. The present investigation examined the relationship among subjective assessments of urgency, reaction time, and brainwave activity with three auditory warning signals. Subjects performed a tracking task involving automated and manual conditions, and were presented with auditory warnings having various levels of perceived and situational urgency. Subjective assessments revealed that subjects were able to rank warnings on an urgency scale, but rankings were altered after warnings were mapped to a situational urgency scale. Reaction times differed between automated and manual tracking task conditions, and physiological data showed attentional differences in response to perceived and situational warning urgency levels. This study shows that the use of physiological measures sensitive to attention and arousal, in conjunction with behavioural and subjective measures, may lead to the design of auditory warnings that produce a sense of urgency in an operator that matches the urgency of the situation.
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Wolak, Tomasz; Cieśla, Katarzyna; Rusiniak, Mateusz; Piłka, Adam; Lewandowska, Monika; Pluta, Agnieszka; Skarżyński, Henryk; Skarżyński, Piotr H
2016-11-28
BACKGROUND The goal of the fMRI experiment was to explore the involvement of central auditory structures in the pathomechanisms of a behaviorally manifested auditory temporary threshold shift in humans. MATERIAL AND METHODS The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation with narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5-4.5 kHz sweeps. RESULTS The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect can already be seen in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. CONCLUSIONS The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation.