Sample records for deducing auditory capability

  1. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
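
    The masking-threshold idea in this abstract can be made concrete with a generic sketch. The code below is a minimal illustration of time-frequency (spectral) subtraction in which a per-bin over-subtraction factor is relaxed wherever an assumed masking threshold indicates residual noise would be inaudible; it is not the authors' wavelet-packet implementation, and the function name, parameters, and values are illustrative assumptions only.

    ```python
    # Minimal sketch (not the authors' exact wavelet-packet method): generic
    # spectral subtraction whose over-subtraction factor alpha is relaxed
    # wherever a (hypothetical) masking threshold says residual noise is inaudible.
    import numpy as np

    def masked_spectral_subtraction(noisy, noise_psd, mask_threshold,
                                    alpha_max=4.0, alpha_min=1.0, floor=0.01):
        """noisy: complex spectrum of one frame; noise_psd, mask_threshold: per-bin estimates."""
        power = np.abs(noisy) ** 2
        # Relax subtraction where the masking threshold exceeds the noise estimate:
        # residual noise below the threshold is assumed to be masked (inaudible).
        ratio = np.clip(noise_psd / (mask_threshold + 1e-12), 0.0, 1.0)
        alpha = alpha_min + (alpha_max - alpha_min) * ratio
        clean_power = np.maximum(power - alpha * noise_psd, floor * noise_psd)
        return np.sqrt(clean_power) * np.exp(1j * np.angle(noisy))

    # Toy usage on a single random frame
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(256) + 1j * rng.standard_normal(256)
    noise = np.full(256, 0.5)
    mask = np.full(256, 0.3)
    enhanced = masked_spectral_subtraction(frame, noise, mask)
    print(enhanced.shape)
    ```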

  2. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  3. [Auditory training with wide-band white noise: effects on the recruitment (III)].

    PubMed

    Domínguez Ugidos, L J; Rodríguez Morejón, C; Vallés Varela, H; Iparraguirre Bolinaga, V; Knaster del Olmo, J

    2001-05-01

    Auditory training with wide-band white noise is a method for the qualitative recovery of hearing in people suffering from sensorineural hearing loss. It is based on the application of a modified wide-band white noise. In a prospective study, we assessed the changes in the recruitment coefficient in a sample of 48 patients who completed a program of 15 sessions of auditory training with wide-band white noise. The average improvement of the recruitment coefficient, expressed as a percentage, is 7.7498%, rising to 23.5249% in the case of the binaural recruitment coefficient. From our results, it can be deduced that auditory training with wide-band white noise reduces recruitment, that is, it decreases recruitment at high intensities both binaurally and in each ear.

  4. Assessing residual reasoning ability in overtly non-communicative patients using fMRI☆

    PubMed Central

    Hampshire, Adam; Parkin, Beth L.; Cusack, Rhodri; Espejo, Davinia Fernández; Allanson, Judith; Kamau, Evelyn; Pickard, John D.; Owen, Adrian M.

    2012-01-01

    It is now well established that some patients who are diagnosed as being in a vegetative state or a minimally conscious state show reliable signs of volition that may only be detected by measuring neural responses. A pertinent question is whether these patients are also capable of logical thought. Here, we validate an fMRI paradigm that can detect the neural fingerprint of reasoning processes and moreover, can confirm whether a participant derives logical answers. We demonstrate the efficacy of this approach in a physically non-communicative patient who had been shown to engage in mental imagery in response to simple auditory instructions. Our results demonstrate that this individual retains a remarkable capacity for higher cognition, engaging in the reasoning task and deducing logical answers. We suggest that this approach is suitable for detecting residual reasoning ability using neural responses and could readily be adapted to assess other aspects of cognition. PMID:24179769

  5. Developmental Trends in Auditory Processing Can Provide Early Predictions of Language Acquisition in Young Infants

    ERIC Educational Resources Information Center

    Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R.; Shao, Jie; Lozoff, Betsy

    2013-01-01

    Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with…

  6. Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Gummer, Anthony W.

    2011-11-01

    Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.
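
    The abstract does not state how the two estimation techniques are combined; purely as an illustration of why combining partly independent, unbiased estimates can push the overall standard deviation below that of either method, an inverse-variance weighted average gives (the 8 dB figures are hypothetical, not values from the paper):

    \[
      \sigma_{\text{comb}} = \left(\sigma_1^{-2} + \sigma_2^{-2}\right)^{-1/2},
      \qquad \sigma_1 = \sigma_2 = 8\,\text{dB} \;\Rightarrow\; \sigma_{\text{comb}} = 8/\sqrt{2} \approx 5.7\,\text{dB}.
    \]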

  7. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed on a perceptual item, i.e. from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.

  8. Human-assisted sound event recognition for home service robots.

    PubMed

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated auditory perception capabilities and human-robot collaboration in sound event recognition.

  9. SDI Software Technology Program Plan Version 1.5

    DTIC Science & Technology

    1987-06-01

    computer generation of auditory communication of meaningful speech. Most speech synthesizers are based on mathematical models of the human vocal tract, but...oral/auditory and multimodal communications. Although such state-of-the-art interaction technology has not fully matured, user experience has...superior pattern matching capabilities and the subliminal intuitive deduction capability. The error performance of humans can be helped by careful

  10. Changes in the Adult Vertebrate Auditory Sensory Epithelium After Trauma

    PubMed Central

    Oesterle, Elizabeth C.

    2012-01-01

    Auditory hair cells transduce sound vibrations into membrane potential changes, ultimately leading to changes in neuronal firing and sound perception. This review provides an overview of the characteristics and repair capabilities of traumatized auditory sensory epithelium in the adult vertebrate ear. Injured mammalian auditory epithelium repairs itself by forming permanent scars but is unable to regenerate replacement hair cells. In contrast, injured non-mammalian vertebrate ear generates replacement hair cells to restore hearing functions. Non-sensory support cells within the auditory epithelium play key roles in the repair processes. PMID:23178236

  11. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    PubMed

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  13. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  14. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    PubMed

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  15. Audiologic Management of Older Adults With Hearing Loss and Compromised Cognitive/Psychoacoustic Auditory Processing Capabilities

    PubMed Central

    Kricos, Patricia B.

    2006-01-01

    The number and proportion of older adults in the United States population is increasing, and more clinical audiologists will be called upon to deliver hearing care to the approximately 35% to 50% of them who experience hearing difficulties. In recent years, the characteristics and sources of receptive communication difficulties in older individuals have been investigated by hearing scientists, cognitive psychologists, and audiologists. It is becoming increasingly apparent that cognitive compromises and psychoacoustic auditory processing disorders associated with aging may contribute to communication difficulties in this population. This paper presents an overview of best practices, based on our current knowledge base, for clinical management of older individuals with limitations in cognitive or psychoacoustic auditory processing capabilities, or both, that accompany aging. PMID:16528428

  16. How the songbird brain listens to its own songs

    NASA Astrophysics Data System (ADS)

    Hahnloser, Richard

    2010-03-01

    Songbirds are capable of vocal learning and communication and are ideally suited to the study of neural mechanisms of auditory feedback processing. When a songbird is deafened in the early sensorimotor phase after tutoring, it fails to imitate the song of its tutor and develops a highly aberrant song. It is also known that birds are capable of storing a long-term memory of tutor song and that they need intact auditory feedback to match their own vocalizations to the tutor's song. Based on these behavioral observations, we investigate feedback processing in single auditory forebrain neurons of juvenile zebra finches that are in a late developmental stage of song learning. We implant birds with miniature motorized microdrives that allow us to record the electrical activity of single neurons while birds are freely moving and singing in their cages. Occasionally, we deliver a brief sound through a loudspeaker to perturb the auditory feedback the bird experiences during singing. These acoustic perturbations of auditory feedback reveal complex sensitivity that cannot be predicted from passive playback responses. Some neurons are highly feedback sensitive in that they respond vigorously to song perturbations, but not to unperturbed songs or perturbed playback. These findings suggest that a computational function of forebrain auditory areas may be to detect errors between actual feedback and mirrored feedback deriving from an internal model of the bird's own song or that of its tutor.

  17. Measures of Working Memory, Sequence Learning, and Speech Recognition in the Elderly.

    ERIC Educational Resources Information Center

    Humes, Larry E.; Floyd, Shari S.

    2005-01-01

    This study describes the measurement of 2 cognitive functions, working-memory capacity and sequence learning, in 2 groups of listeners: young adults with normal hearing and elderly adults with impaired hearing. The measurement of these 2 cognitive abilities with a unique, nonverbal technique capable of auditory, visual, and auditory-visual…

  18. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948

  19. Developmental trends in auditory processing can provide early predictions of language acquisition in young infants.

    PubMed

    Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R; Shao, Jie; Lozoff, Betsy

    2013-03-01

    Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with both Auditory Brainstem Response (ABR) and language assessments. At 6 weeks and/or 9 months of age, the infants underwent ABR testing using both a standard hearing screening protocol with 30 dB clicks and a second protocol using click pairs separated by 8, 16, and 64-ms intervals presented at 80 dB. We evaluated the effects of interval duration on ABR latency and amplitude elicited by the second click. At 9 months, language development was assessed via parent report on the Chinese Communicative Development Inventory - Putonghua version (CCDI-P). Wave V latency z-scores of the 64-ms condition at 6 weeks showed strong direct relationships with Wave V latency in the same condition at 9 months. More importantly, shorter Wave V latencies at 9 months showed strong relationships with the CCDI-P composite consisting of phrases understood, gestures, and words produced. Likewise, infants who had greater decreases in Wave V latencies from 6 weeks to 9 months had higher CCDI-P composite scores. Females had higher language development scores and shorter Wave V latencies at both ages than males. Interestingly, when the ABR Wave V latencies at both ages were taken into account, the direct effects of gender on language disappeared. In conclusion, these results support the importance of low-level auditory processing capabilities for early language acquisition in a population of typically developing young infants. Moreover, the auditory brainstem response in this paradigm shows promise as an electrophysiological marker to predict individual differences in language development in young children. © 2012 Blackwell Publishing Ltd.

  20. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates signals from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by providing an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
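
    The image-to-sound mapping described above can be sketched in a few lines. The sketch below is not the VISOR implementation; it only illustrates one common mapping (assumed here) in which image columns are scanned over time, row position sets tone frequency, and normalized brightness sets amplitude. All parameter values are illustrative, and simple min-max scaling stands in for true histogram normalization.

    ```python
    # Minimal image-to-sound sketch: columns -> time, rows -> frequency,
    # brightness -> amplitude. Not the VISOR implementation.
    import numpy as np

    def image_to_audio(image, duration=1.0, fs=22050, f_lo=200.0, f_hi=4000.0):
        """image: 2-D array (rows x cols) of brightness values."""
        rows, cols = image.shape
        # Min-max scaling in place of full histogram normalization (for brevity)
        img = (image - image.min()) / (image.max() - image.min() + 1e-12)
        freqs = np.logspace(np.log10(f_hi), np.log10(f_lo), rows)  # top row = high pitch
        samples_per_col = int(duration * fs / cols)
        t = np.arange(samples_per_col) / fs
        audio = []
        for c in range(cols):
            tones = np.sin(2 * np.pi * freqs[:, None] * t)        # rows x samples
            audio.append((img[:, c:c + 1] * tones).sum(axis=0))   # brightness-weighted sum
        audio = np.concatenate(audio)
        return audio / (np.abs(audio).max() + 1e-12)

    demo = image_to_audio(np.random.default_rng(1).random((32, 16)))
    print(demo.shape)
    ```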

  1. A Study of the Role of Central Auditory Processing in Learning Disabilities: A Prospectus Submitted to the Department of Speech.

    ERIC Educational Resources Information Center

    Murray, Hugh

    Proposed is a study to evaluate the auditory systems of learning disabled (LD) students with a new audiological, diagnostic, stimulus apparatus which is capable of objectively measuring the interaction of the binaural aspects of hearing. The author points out problems with LD definitions that exclude neurological disorders. The detection of…

  2. Intertrial auditory neural stability supports beat synchronization in preschoolers

    PubMed Central

    Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina

    2016-01-01

    The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457

  3. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    PubMed Central

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  4. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    PubMed Central

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2015-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067

  5. Dolphin sonar detection and discrimination capabilities

    NASA Astrophysics Data System (ADS)

    Au, Whitlow W. L.

    2004-05-01

    Dolphins have a very sophisticated short range sonar that surpasses all technological sonar in its capabilities to perform complex target discrimination and recognition tasks. The system that the U.S. Navy has for detecting mines buried under ocean sediment is one that uses Atlantic bottlenose dolphins. However, close examination of the dolphin sonar system will reveal that the dolphin acoustic hardware is fairly ordinary and not very special. The transmitted signals have peak-to-peak amplitudes as high as 225-228 dB re 1 μPa which translates to an rms value of approximately 210-213 dB. The transmit beamwidth is fairly broad at about 10° in both the horizontal and vertical planes and the receiving beamwidth is slightly broader by several degrees. The auditory filters are not very narrow with Q values of about 8.4. Despite these fairly ordinary features of the acoustic system, these animals still demonstrate very unusual and astonishing capabilities. Some of the capabilities of the dolphin sonar system will be presented and the reasons for their keen sonar capabilities will be discussed. Important features of their sonar include the broadband clicklike signals used, adaptive sonar search capabilities and large dynamic range of its auditory system.
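
    As a side note on the "Q values of about 8.4" mentioned above: the quality factor of an auditory filter is conventionally the ratio of its center frequency to its bandwidth. Purely as an illustrative calculation (the 100 kHz center frequency is an assumed value, not a figure from the abstract):

    \[
      Q = \frac{f_c}{\Delta f}, \qquad f_c = 100\,\text{kHz},\; Q \approx 8.4 \;\Rightarrow\; \Delta f \approx 12\,\text{kHz}.
    \]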

  6. Preattentive extraction of abstract feature conjunctions from auditory stimulation as reflected by the mismatch negativity (MMN).

    PubMed

    Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I

    2001-03-01

    Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.
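
    A minimal sketch of the stimulus logic described in this abstract, assuming illustrative frequency and level ranges (the actual values used in the study are not reproduced here): standards obey the rule "the higher the frequency, the louder the intensity", while occasional deviants pair frequency and level inconsistently.

    ```python
    # Illustrative sketch of the frequency-intensity conjunction rule; all
    # parameter values are assumptions, not those of the original study.
    import numpy as np

    def make_trial_sequence(n_trials=500, p_deviant=0.1, seed=0):
        rng = np.random.default_rng(seed)
        freqs = np.logspace(np.log10(500), np.log10(4000), 16)   # Hz, wide range
        levels = np.linspace(50, 80, 16)                          # dB SPL, rule-linked
        trials = []
        for _ in range(n_trials):
            i = rng.integers(16)
            if rng.random() < p_deviant:
                # Deviant: level index shifted away from the rule-consistent one
                j = (i + rng.integers(4, 12)) % 16
                trials.append(("deviant", freqs[i], levels[j]))
            else:
                trials.append(("standard", freqs[i], levels[i]))
        return trials

    seq = make_trial_sequence()
    print(sum(kind == "deviant" for kind, _, _ in seq), "deviants of", len(seq))
    ```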

  7. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
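
    A sketch of one way a detected visual event could be rendered as a localizable sound. This is not the authors' implementation: the 240×180 frame size is taken from the DAVIS 240B sensor, but the mapping chosen here (horizontal position to constant-power stereo panning, vertical position to pitch) is an assumption for illustration only.

    ```python
    # Minimal sketch under simple assumptions (not the authors' system):
    # render an event at pixel (x, y) as a short stereo tone whose level
    # difference encodes azimuth and whose pitch encodes elevation.
    import numpy as np

    def event_to_stereo(x, y, width=240, height=180, fs=22050, dur=0.15):
        """x, y: event pixel coordinates; returns an (n_samples, 2) stereo buffer."""
        azimuth = 2.0 * x / (width - 1) - 1.0              # -1 (left) .. +1 (right)
        freq = 300.0 + 1500.0 * (1.0 - y / (height - 1))   # higher in frame = higher pitch
        t = np.arange(int(fs * dur)) / fs
        tone = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)
        left = tone * np.sqrt(0.5 * (1.0 - azimuth))        # constant-power panning
        right = tone * np.sqrt(0.5 * (1.0 + azimuth))
        return np.stack([left, right], axis=1)

    buf = event_to_stereo(x=60, y=40)   # an event left of centre, upper part of frame
    print(buf.shape)
    ```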

  10. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Tata, Matthew S.

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518

  11. Binaural speech processing in individuals with auditory neuropathy.

    PubMed

    Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B

    2012-12-13

    Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Effects of Transcranial Direct Current Stimulation on Expression of Immediate Early Genes (IEG’s)

    DTIC Science & Technology

    2015-12-01

    enhancing cognitive capabilities in human subjects [1, 2, 3]. Studies have also shown tDCS can produce positive outcomes in treating depression ...translated into DNA, they can re-enter the nucleus and cause the induction of novel gene transcription (Figure 1). As stated earlier, there has been...in striatum due to caffeine intake [26], and activation in auditory cortex due to auditory cues [27]. cFos is able to auto-regulate itself, by a negative

  13. Adipose-derived stromal cells enhance auditory neuron survival in an animal model of sensory hearing loss.

    PubMed

    Schendzielorz, Philipp; Vollmer, Maike; Rak, Kristen; Wiegner, Armin; Nada, Nashwa; Radeloff, Katrin; Hagen, Rudolf; Radeloff, Andreas

    2017-10-01

    A cochlear implant (CI) is an electronic prosthesis that can partially restore speech perception capabilities. Optimum information transfer from the cochlea to the central auditory system requires a proper functioning auditory nerve (AN) that is electrically stimulated by the device. In deafness, the lack of neurotrophic support, normally provided by the sensory cells of the inner ear, however, leads to gradual degeneration of auditory neurons with undesirable consequences for CI performance. We evaluated the potential of adipose-derived stromal cells (ASCs) that are known to produce neurotrophic factors to prevent neural degeneration in sensory hearing loss. For this, co-cultures of ASCs with auditory neurons have been studied, and autologous ASC transplantation has been performed in a guinea pig model of gentamicin-induced sensory hearing loss. In vitro ASCs were neuroprotective and considerably increased the neuritogenesis of auditory neurons. In vivo transplantation of ASCs into the scala tympani resulted in an enhanced survival of auditory neurons. Specifically, peripheral AN processes that are assumed to be the optimal activation site for CI stimulation and that are particularly vulnerable to hair cell loss showed a significantly higher survival rate in ASC-treated ears. ASC transplantation into the inner ear may restore neurotrophic support in sensory hearing loss and may help to improve CI performance by enhanced AN survival. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  14. Coordinated Noninvasive Studies (CNS) Project

    NASA Astrophysics Data System (ADS)

    Lauter, Judith

    1988-11-01

    Research activities during this period include: data collection related to the interface between complex-sound production and perception, specifically, studies on speech acoustics including two experiments on voice-onset-time variability in productions by speakers of several languages, and a series on acoustical characteristics of emotional expression; data collection regarding individual differences in the effect of stimulus characteristics on relative ear advantages; continuing data analysis and new collections documenting individual differences in auditory evoked potentials, with details related to auditory-system asymmetries; preliminary tests regarding the match between behavioral measures of relative ear advantages and quantitative-electroencephalographic asymmetries observed during auditory stimulation; and pilot testing using a combination of Nuclear Magnetic Resonance (NMR) anatomical-imaging and chemical-spectral-analysis capabilities to study physiological activation in the human brain.

  15. Alpha Rhythms in Audition: Cognitive and Clinical Perspectives

    PubMed Central

    Weisz, Nathan; Hartmann, Thomas; Müller, Nadia; Lorenz, Isabel; Obleser, Jonas

    2011-01-01

    Like the visual and the sensorimotor systems, the auditory system exhibits pronounced alpha-like resting oscillatory activity. Due to the relatively small spatial extent of auditory cortical areas, this rhythmic activity is less obvious and frequently masked by non-auditory alpha-generators when recording non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). Following stimulation with sounds, marked desynchronizations can be observed between 6 and 12 Hz, which can be localized to the auditory cortex. However, knowledge about the functional relevance of the auditory alpha rhythm has remained scarce so far. Results from the visual and sensorimotor systems have fuelled the hypothesis of alpha activity reflecting a state of functional inhibition. The current article pursues several intentions: (1) Firstly, we review and present our own evidence (MEG, EEG, sEEG) for the existence of an auditory alpha-like rhythm independent of visual or motor generators, something that is occasionally met with skepticism. (2) In a second part we will discuss tinnitus and how this audiological symptom may relate to reduced background alpha. The clinical part will give an introduction to a method which aims to modulate neurophysiological activity hypothesized to underlie this distressing disorder. Using neurofeedback, one is able to directly target relevant oscillatory activity. Preliminary data point to a high potential of this approach for treating tinnitus. (3) Finally, in a cognitive neuroscientific part we will show that auditory alpha is modulated by anticipation/expectations with and without auditory stimulation. We will also introduce ideas and initial evidence that alpha oscillations are involved in the most complex capability of the auditory system, namely speech perception. The evidence presented in this article corroborates findings from other modalities, indicating that alpha-like activity functionally has a universal inhibitory role across sensory modalities. PMID:21687444

  16. Effect of conductive hearing loss on central auditory function.

    PubMed

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: p<0.001). Individuals with CHL had significantly lower correct responses than individuals with normal hearing for both sides (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  17. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. The overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  18. Review of weapon noise measurement and damage risk criteria: considerations for auditory protection and performance.

    PubMed

    Nakashima, Ann; Farinaccio, Rocco

    2015-04-01

    Noise-induced hearing loss resulting from weapon noise exposure has been studied for decades. A summary of recent work in weapon noise signal analysis, current knowledge of hearing damage risk criteria, and auditory performance in impulse noise is presented. Most of the currently used damage risk criteria are based on data that cannot be replicated or verified. There is a need to address the effects of combined noise exposures, from similar or different weapons and continuous background noise, in future noise exposure regulations. Advancements in hearing protection technology have expanded the options available to soldiers. Individual selection of hearing protection devices that are best suited to the type of exposure, the auditory task requirements, and hearing status of the user could help to facilitate their use. However, hearing protection devices affect auditory performance, which in turn affects situational awareness in the field. This includes communication capability and the localization and identification of threats. Laboratory training using high-fidelity weapon noise recordings has the potential to improve the auditory performance of soldiers in the field, providing a low-cost tool to enhance readiness for combat. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  19. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    PubMed

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  20. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
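
    For readers unfamiliar with the two cues named above, the standard textbook relations (not specific to this study) are that in the free field the received level of a point source falls by about 6 dB per doubling of distance, and that the direct-to-reverberant ratio falls with distance at roughly the same rate under the idealized assumption that late reverberant energy is constant across the room:

    \[
      L(d) = L(d_0) - 20\log_{10}\!\left(\frac{d}{d_0}\right),
      \qquad
      \text{DRR}(d) = 10\log_{10}\!\left(\frac{E_{\text{direct}}(d)}{E_{\text{reverb}}}\right)
      \approx \text{DRR}(d_0) - 20\log_{10}\!\left(\frac{d}{d_0}\right).
    \]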

  1. Gene therapy in the inner ear using adenovirus vectors.

    PubMed

    Husseman, Jacob; Raphael, Yehoash

    2009-01-01

    Therapies for the protection and regeneration of auditory hair cells are of great interest given the significant monetary and lifestyle impact of hearing loss. The past decade has seen tremendous advances in the use of adenoviral vectors to achieve these aims. Preliminary data demonstrated the functional capacity of this technique as adenoviral-induced expression of neurotrophic and growth factors protected hair cells and spiral ganglion neurons from ototoxic insults. Subsequent efforts confirmed the feasibility of adenoviral transfection of cells in the auditory neuroepithelium via cochleostomy into the scala media. Most recently, efforts have focused on regeneration of depleted hair cells. Mammalian hearing loss is generally considered a permanent insult as the auditory epithelium lacks a basal layer capable of producing new hair cells. Recently, the transcription factor Atoh1 has been found to play a critical role in hair cell differentiation. Adenoviral-mediated overexpression of Atoh1 in culture and in vivo has shown the ability to regenerate auditory and vestibular hair cells by causing transdifferentiation of neighboring epithelial-supporting cells. Functional recovery of both the auditory and vestibular systems has been documented following adenoviral-induced Atoh1 overexpression. Copyright (c) 2009 S. Karger AG, Basel.

  2. Understanding response proclivity and the limits of sensory capability: What do we hear and what can we hear?

    NASA Astrophysics Data System (ADS)

    Leek, Marjorie R.; Neff, Donna L.

    2004-05-01

    Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even ``cognitive'') processes as a consequence of stimulus uncertainty, and can be distinguished from ``energetic'' masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]

  3. Comparative assessment of amphibious hearing in pinnipeds.

    PubMed

    Reichmuth, Colleen; Holt, Marla M; Mulsow, Jason; Sills, Jillian M; Southall, Brandon L

    2013-06-01

    Auditory sensitivity in pinnipeds is influenced by the need to balance efficient sound detection in two vastly different physical environments. Previous comparisons between aerial and underwater hearing capabilities have considered media-dependent differences relative to auditory anatomy, acoustic communication, ecology, and amphibious life history. New data for several species, including recently published audiograms and previously unreported measurements obtained in quiet conditions, necessitate a re-evaluation of amphibious hearing in pinnipeds. Several findings related to underwater hearing are consistent with earlier assessments, including an expanded frequency range of best hearing in true seals that spans at least six octaves. The most notable new results indicate markedly better aerial sensitivity in two seals (Phoca vitulina and Mirounga angustirostris) and one sea lion (Zalophus californianus), likely attributable to improved ambient noise control in test enclosures. An updated comparative analysis alters conventional views and demonstrates that these amphibious pinnipeds have not necessarily sacrificed aerial hearing capabilities in favor of enhanced underwater sound reception. Despite possessing underwater hearing that is nearly as sensitive as fully aquatic cetaceans and sirenians, many seals and sea lions have retained acute aerial hearing capabilities rivaling those of terrestrial carnivores.

  4. Auditory opportunity and visual constraint enabled the evolution of echolocation in bats.

    PubMed

    Thiagavel, Jeneni; Cechetto, Clément; Santana, Sharlene E; Jakobsen, Lasse; Warrant, Eric J; Ratcliffe, John M

    2018-01-08

    Substantial evidence now supports the hypothesis that the common ancestor of bats was nocturnal and capable of both powered flight and laryngeal echolocation. This scenario entails a parallel sensory and biomechanical transition from a nonvolant, vision-reliant mammal to one capable of sonar and flight. Here we consider anatomical constraints and opportunities that led to a sonar rather than vision-based solution. We show that bats' common ancestor had eyes too small to allow for successful aerial hawking of flying insects at night, but an auditory brain design sufficient to afford echolocation. Further, we find that among extant predatory bats (all of which use laryngeal echolocation), those with putatively less sophisticated biosonar have relatively larger eyes than do more sophisticated echolocators. We contend that signs of ancient trade-offs between vision and echolocation persist today, and that non-echolocating, phytophagous pteropodid bats may retain some of the necessary foundations for biosonar.

  5. Investigating brain response to music: a comparison of different fMRI acquisition schemes.

    PubMed

    Mueller, Karsten; Mildner, Toralf; Fritz, Thomas; Lepsien, Jöran; Schwarzbauer, Christian; Schroeter, Matthias L; Möller, Harald E

    2011-01-01

    Functional magnetic resonance imaging (fMRI) in auditory experiments is a challenge, because the scanning procedure produces considerable noise that can interfere with the auditory paradigm. The noise might either mask the auditory material presented, or interfere with stimuli designed to evoke emotions because it sounds loud and rather unpleasant. Therefore, scanning paradigms that allow interleaved auditory stimulation and image acquisition appear to be advantageous. The sparse temporal sampling (STS) technique uses a very long repetition time in order to achieve stimulus presentation in the absence of scanner noise. Although only relatively few volumes are acquired for the resulting data sets, recent studies using this method have yielded remarkable results. A new development is the interleaved silent steady state (ISSS) technique. Compared with STS, this method is capable of acquiring several volumes in the time frame between the auditory trials (while the magnetization is kept in a steady state during stimulus presentation). In order to draw conclusions about the optimum fMRI procedure with auditory stimulation, different echo-planar imaging (EPI) acquisition schemes were compared: continuous scanning, STS, and ISSS. The total acquisition time of each sequence was adjusted to about 12.5 min. The results indicate that the ISSS approach exhibits the highest sensitivity in detecting subtle activity in sub-cortical brain regions. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Characterization of mannose binding lectin from channel catfish Ictalurus punctatus

    USDA-ARS?s Scientific Manuscript database

    Mannose-binding lectin (MBL) is an important component of innate immunity capable of activating the lectin pathway of the complement system. A MBL gene was isolated from channel catfish (Ictalurus punctatus). The deduced protein contains a canonical collagen-like domain, a carbohydrate recognition d...

  7. Audibility of reverse alarms under hearing protectors for normal and hearing-impaired listeners.

    PubMed

    Robinson, G S; Casali, J G

    1995-11-01

    The question of whether or not an individual suffering from a hearing loss is capable of hearing an auditory alarm or warning is an extremely important industrial safety issue. The ISO Standard that addresses auditory warnings for workplaces requires that any auditory alarm or warning be audible to all individuals in the workplace including those suffering from a hearing loss and/or wearing hearing protection devices (HPDs). Research was undertaken to determine how the ability to detect an alarm or warning signal changed for individuals with normal hearing and two levels of hearing loss as the levels of masking noise and alarm were manipulated. Pink noise was used as the masker and a heavy-equipment reverse alarm was used as the signal. The rating method paradigm of signal detection theory was used as the experimental procedure to separate the subjects' absolute sensitivities to the alarm from their individual criteria for deciding to respond in an affirmative manner. Results indicated that even at a fairly low signal-to-noise ratio (0 dB), subjects with a substantial hearing loss [a pure-tone average (PTA) hearing level of 45-50 dBHL in both ears] were capable of hearing the reverse alarm while wearing a high-attenuation earmuff in the pink noise used in the study.
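
    The "rating method" separation of absolute sensitivity from response criterion described above follows standard signal detection theory. As a rough illustration of that step only, the sketch below computes d' and the criterion c from hypothetical hit and false-alarm counts; the counts and the log-linear correction are assumptions, not values or code from the study.

        # Minimal sketch: separating sensitivity (d') from response criterion (c)
        # in a detection task. The counts are illustrative, not data from the study.
        from scipy.stats import norm

        def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
            """Return (d', c) computed from raw response counts."""
            # Log-linear correction keeps hit/false-alarm rates away from 0 and 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
            d_prime = z_hit - z_fa             # sensitivity to the alarm in the noise
            criterion = -0.5 * (z_hit + z_fa)  # bias toward "yes" (<0) or "no" (>0)
            return d_prime, criterion

        # Hypothetical listener detecting a reverse alarm at a 0 dB signal-to-noise ratio.
        print(dprime_and_criterion(hits=42, misses=8, false_alarms=12, correct_rejections=38))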

  8. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users.

    PubMed

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research.

  9. Genetics and physiology of the nuclearly inherited yellow foliar mutants in soybean

    USDA-ARS?s Scientific Manuscript database

    Plant photosynthetic pigments are important in harvesting the light energy and transfer of energy during photosynthesis. There are several yellow foliar mutants discovered in soybean and chromosomal locations for about half of them have been deduced. Viable-yellow mutants are capable of surviving wi...

  10. Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal

    PubMed Central

    Namazi, Hamidreza; Kulish, Vladimir V.

    2016-01-01

    One of the major challenges in brain research is to relate the structural features of an auditory stimulus to the structural features of the electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. The memory content can also be considered for the stimulus itself. Despite all the work done on the effect of stimuli on the human EEG and brain memory, no study has addressed the memory content of the stimulus or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose we use the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis show that an auditory stimulus with higher memory content causes a larger increment in the memory content of an EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory. PMID:27528219
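
    The memory measure used in this study (and in the companion record below) is the Hurst exponent. As a generic illustration of how such an exponent can be estimated from a sampled signal, the sketch below uses rescaled-range (R/S) analysis with assumed window sizes; it is not the authors' pipeline, and the white-noise example is purely illustrative.

        # Minimal sketch: estimating the Hurst exponent of a 1-D signal by
        # rescaled-range (R/S) analysis. H ~ 0.5 indicates no long-range memory,
        # H > 0.5 indicates persistent (long-memory) behaviour.
        import numpy as np

        def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256, 512)):
            x = np.asarray(x, dtype=float)
            rs_means = []
            for n in window_sizes:
                rs_vals = []
                for start in range(0, len(x) - n + 1, n):
                    seg = x[start:start + n]
                    dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
                    r = dev.max() - dev.min()           # range of the cumulative series
                    s = seg.std()
                    if s > 0:
                        rs_vals.append(r / s)
                rs_means.append(np.mean(rs_vals))
            # Slope of log(R/S) against log(window size) approximates the Hurst exponent.
            slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
            return slope

        rng = np.random.default_rng(0)
        print(hurst_rs(rng.standard_normal(4096)))      # white noise: H close to 0.5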

  11. Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal.

    PubMed

    Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Kulish, Vladimir V

    2016-08-30

    One of the major challenges in brain research is to relate the structural features of an auditory stimulus to the structural features of the electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. The memory content can also be considered for the stimulus itself. Despite all the work done on the effect of stimuli on the human EEG and brain memory, no study has addressed the memory content of the stimulus or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose we use the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis show that an auditory stimulus with higher memory content causes a larger increment in the memory content of an EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory.

  12. Proceedings of the 1977 Image Conference Held at Williams Air Force Base, Arizona on 17-18 May 1977

    DTIC Science & Technology

    1977-05-01

    both stimulating and informative. May your experiences at this Conference be most rewarding and enjoyable. KEYNOTE ADDRESS Senator Barry M. Goldwater...capabilities, human motion perceiving capabilities and the dynamics of the flight system being simulated. Subliminal washout schemes and recent develop...facilities and the military user by the graphic presentation of information with auditory overlay. Background Pieces of the electronic graphic mail concept

  13. Synchronisation signatures in the listening brain: a perspective from non-invasive neuroelectrophysiology.

    PubMed

    Weisz, Nathan; Obleser, Jonas

    2014-01-01

    Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Interactive Television in Nursing Continuing Education

    ERIC Educational Resources Information Center

    Fry, Carlton F.; And Others

    1976-01-01

    The use of a telemedicine system (live, color microwave television transmission with two-way auditory and visual communication capability) to teach a course in critical care nursing from an urban university medical center to staff members in rural southeastern Ohio hospitals is described. (MS)

  15. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    PubMed Central

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  16. Auditory Implant Research at the House Ear Institute 1989–2013

    PubMed Central

    Shannon, Robert V.

    2014-01-01

    The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations include being one of the first cochlear implant (CI) centers, being the first center to implant a child with a cochlear implant in the US, developing the auditory brainstem implant, and developing multiple surgical approaches and tools for Otology. This paper reviews the second stage of auditory implant research at House – in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8–10 bands of information. The noise-band vocoder allowed us to evaluate the effects of the manipulation of the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. And the Auditory Brainstem Implant was developed and improved and its application expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to the improved outcomes for patients with the CI and ABI devices. PMID:25449009
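
    The noise-band vocoder central to the speech studies described above can be sketched in a few lines: band-pass the speech into contiguous frequency bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise carriers. The code below is a generic illustration under assumed parameters (four logarithmically spaced bands, 16-kHz sampling), not the processing actually used at HEI.

        # Minimal sketch of a noise-band vocoder (assumed band layout and sample rate).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def noise_vocoder(speech, fs=16000, n_bands=4, f_lo=100.0, f_hi=7000.0):
            edges = np.geomspace(f_lo, f_hi, n_bands + 1)      # log-spaced band edges
            noise = np.random.default_rng(0).standard_normal(len(speech))
            out = np.zeros(len(speech))
            for lo, hi in zip(edges[:-1], edges[1:]):
                b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
                band = filtfilt(b, a, speech)                  # speech limited to this band
                env = np.abs(hilbert(band))                    # temporal envelope of the band
                carrier = filtfilt(b, a, noise)                # band-limited noise carrier
                out += env * carrier                           # envelope-modulated noise band
            return out / (np.max(np.abs(out)) + 1e-12)         # normalise to avoid clipping

        # Usage: vocode one second of a synthetic, vowel-like test signal.
        fs = 16000
        t = np.arange(fs) / fs
        test_signal = np.sin(2 * np.pi * 120 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
        vocoded = noise_vocoder(test_signal, fs)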

  17. Top-down modulation of visual and auditory cortical processing in aging.

    PubMed

    Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M

    2015-02-01

    Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Comparison between chloral hydrate and propofol-ketamine as sedation regimens for pediatric auditory brainstem response testing.

    PubMed

    Abulebda, Kamal; Patel, Vinit J; Ahmed, Sheikh S; Tori, Alvaro J; Lutfi, Riad; Abu-Sultaneh, Samer

    2017-10-28

    The use of diagnostic auditory brainstem response testing under sedation is currently the "gold standard" in infants and young children who are not developmentally capable of completing the test. The aim of the study was to compare a propofol-ketamine regimen to an oral chloral hydrate regimen for sedating children undergoing auditory brainstem response testing. Patients between 4 months and 6 years of age who required sedation for auditory brainstem response testing were included in this retrospective study. Drug doses, adverse effects, sedation times, and the effectiveness of the sedative regimens were reviewed. 73 patients underwent oral chloral hydrate sedation, while 117 received propofol-ketamine sedation. 12% of the patients in the chloral hydrate group failed to achieve the desired sedation level. The average procedure, recovery and total nursing times were significantly lower in the propofol-ketamine group. The propofol-ketamine group experienced a higher incidence of transient hypoxemia. Both sedation regimens can be successfully used for sedating children undergoing auditory brainstem response testing. While deep sedation using a propofol-ketamine regimen offers more efficiency than moderate sedation using chloral hydrate, it does carry a higher incidence of transient hypoxemia, which warrants the use of a highly skilled team trained in pediatric cardio-respiratory monitoring and airway management. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  19. A Hypothesis: Life Initiated from Two Genes, as Deduced from the RNA World Hypothesis and the Characteristics of Life-Like Systems

    PubMed Central

    Kawamura, Kunio

    2016-01-01

    RNA played a central role in the emergence of the first life-like system on primitive Earth since RNA molecules contain both genetic information and catalytic activity. However, there are several drawbacks regarding the RNA world hypothesis. Here, I briefly discuss the feasibility of the RNA world hypothesis to deduce the RNA functions that are essential for forming a life-like system. At the same time, I have conducted a conceptual analysis of the characteristics of biosystems as a useful approach to deduce a realistic life-like system in relation to the definition of life. For instance, an RNA-based life-like system should possess enough stability to resist environmental perturbations, by developing a cell-like compartment, for instance. Here, a conceptual viewpoint is summarized to provide a realistic life-like system that is compatible with the primitive Earth environment and the capabilities of RNA molecules. According to the empirical and conceptual analysis, I propose the hypothesis that the first life-like system could have initiated from only two genes. PMID:27490571

  20. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis

    PubMed Central

    Patel, Aniruddh D.; Iversen, John R.

    2013-01-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi. PMID:24860439

  1. Speech Restoration: An Interactive Process

    ERIC Educational Resources Information Center

    Grataloup, Claire; Hoen, Michael; Veuillet, Evelyne; Collet, Lionel; Pellegrino, Francois; Meunier, Fanny

    2009-01-01

    Purpose: This study investigates the ability to understand degraded speech signals and explores the correlation between this capacity and the functional characteristics of the peripheral auditory system. Method: The authors evaluated the capability of 50 normal-hearing native French speakers to restore time-reversed speech. The task required them…

  2. Individualization of music-based rhythmic auditory cueing in Parkinson's disease.

    PubMed

    Bella, Simone Dalla; Dotov, Dobromir; Bardy, Benoît; de Cock, Valérie Cochen

    2018-06-04

    Gait dysfunctions in Parkinson's disease can be partly relieved by rhythmic auditory cueing. This consists in asking patients to walk with a rhythmic auditory stimulus such as a metronome or music. The effect on gait is visible immediately in terms of increased speed and stride length. Moreover, training programs based on rhythmic cueing can have long-term benefits. The effect of rhythmic cueing, however, varies from one patient to the other. Patients' response to the stimulation may depend on rhythmic abilities, often deteriorating with the disease. Relatively spared abilities to track the beat favor a positive response to rhythmic cueing. On the other hand, most patients with poor rhythmic abilities either do not respond to the cues or experience gait worsening when walking with cues. An individualized approach to rhythmic auditory cueing with music is proposed to cope with this variability in patients' response. This approach calls for using assistive mobile technologies capable of delivering cues that adapt in real time to patients' gait kinematics, thus affording step synchronization to the beat. Individualized rhythmic cueing can provide a safe and cost-effective alternative to standard cueing that patients may want to use in their everyday lives. © 2018 New York Academy of Sciences.

  3. Ira Hirsh and oral deaf education: The role of audition in language development

    NASA Astrophysics Data System (ADS)

    Geers, Ann

    2002-05-01

    Prior to the 1960s, the teaching of speech to deaf children consisted primarily of instruction in lip reading and tactile perception accompanied by imitative exercises in speech sound production. Hirsh came to Central Institute for the Deaf with an interest in discovering the auditory capabilities of normal-hearing listeners. This interest led him to speculate that more normal speech development could be encouraged in deaf children by maximizing use of their limited residual hearing. Following the tradition of Max Goldstein, Edith Whetnall, and Dennis Fry, Hirsh gave scientific validity to the use of amplified speech as the primary avenue to oral language development in prelingually deaf children. This ``auditory approach,'' combined with an emphasis on early intervention, formed the basis for auditory-oral education as we know it today. This presentation will examine how the speech perception, language, and reading skills of prelingually deaf children have changed as a result of improvements in auditory technology that have occurred over the past 30 years. Current data from children using cochlear implants will be compared with data collected earlier from children with profound hearing loss who used hearing aids. [Work supported by NIH.]

  4. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), it appears in the nucleus of the brachium of the IC (BIN) and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  5. Improving Memory Span in Children with Down Syndrome

    ERIC Educational Resources Information Center

    Conners, F. A.; Rosenquist, C. J.; Arnett, L.; Moore, M. S.; Hume, L. E.

    2008-01-01

    Background: Down syndrome (DS) is characterized by impaired memory span, particularly auditory verbal memory span. Memory span is linked developmentally to several language capabilities, and may be a basic capacity that enables language learning. If children with DS had better memory span, they might benefit more from language intervention. The…

  6. Inferential reasoning by exclusion in great apes, lesser apes, and spider monkeys.

    PubMed

    Hill, Andrew; Collier-Baker, Emma; Suddendorf, Thomas

    2011-02-01

    Using the cups task, in which subjects are presented with limited visual or auditory information that can be used to deduce the location of a hidden reward, Call (2004) found prima facie evidence of inferential reasoning by exclusion in several great ape species. One bonobo (Pan paniscus) and two gorillas (Gorilla gorilla) appeared to make such inferences in both the visual and auditory domains. However, common chimpanzees (Pan troglodytes) were successful only in the visual domain, and Bornean orangutans (Pongo pygmaeus) in neither. The present research built on this paradigm, and Experiment 1 yielded prima facie evidence of inference by exclusion in both domains for two common chimpanzees, and in the visual domain for two Sumatran orangutans (Pongo abelii). Experiments 2 and 3 demonstrated that two specific associative learning explanations could not readily account for these results. Because an important focus of the program of research was to assess the cognitive capacities of lesser apes (family Hylobatidae), we modified Call's original procedures to better suit their attentional and dispositional characteristics. In Experiment 1, testing was also attempted with three gibbon genera (Symphalangus, Nomascus, Hylobates), but none of the subjects completed the standard task. Further testing of three siamangs (Symphalangus syndactylus) and a spider monkey (Ateles geoffroyi) using a faster method yielded prima facie evidence of inferential reasoning by exclusion in the visual domain among the siamangs (Experiment 4).

  7. Hair cell regeneration in the avian auditory epithelium.

    PubMed

    Stone, Jennifer S; Cotanche, Douglas A

    2007-01-01

    Regeneration of sensory hair cells in the mature avian inner ear was first described just over 20 years ago. Since then, it has been shown that many other non-mammalian species either continually produce new hair cells or regenerate them in response to trauma. However, mammals exhibit limited hair cell regeneration, particularly in the auditory epithelium. In birds and other non-mammals, regenerated hair cells arise from adjacent non-sensory (supporting) cells. Hair cell regeneration was initially described as a proliferative response whereby supporting cells re-enter the mitotic cycle, forming daughter cells that differentiate into either hair cells or supporting cells and thereby restore cytoarchitecture and function in the sensory epithelium. However, further analyses of the avian auditory epithelium (and amphibian vestibular epithelium) revealed a second regenerative mechanism, direct transdifferentiation, during which supporting cells change their gene expression and convert into hair cells without dividing. In the chicken auditory epithelium, these two distinct mechanisms show unique spatial and temporal patterns, suggesting they are differentially regulated. Current efforts are aimed at identifying signals that maintain supporting cells in a quiescent state or direct them to undergo direct transdifferentiation or cell division. Here, we review current knowledge about supporting cell properties and discuss candidate signaling molecules for regulating supporting cell behavior, in quiescence and after damage. While significant advances have been made in understanding regeneration in non-mammals over the last 20 years, we have yet to determine why the mammalian auditory epithelium lacks the ability to regenerate hair cells spontaneously and whether it is even capable of significant regeneration under additional circumstances. The continued study of mechanisms controlling regeneration in the avian auditory epithelium may lead to strategies for inducing significant and functional regeneration in mammals.

  8. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    PubMed Central

    Berti, Stefan

    2013-01-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including a sudden onset of a transient sound and a slight deviation of otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanisms and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it is open whether transient detection—which does not rely on predictive coding mechanisms—can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or are presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is higher compared to transients. These findings suggest different routes of distraction allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a pre-requisite for adaptive behavior. PMID:23874278

  9. Auditory Midbrain Implant: Research and Development Towards a Second Clinical Trial

    PubMed Central

    Lim, Hubert H.; Lenarz, Thomas

    2015-01-01

    The cochlear implant is considered one of the most successful neural prostheses to date, which was made possible by visionaries who continued to develop the cochlear implant through multiple technological and clinical challenges. However, patients without a functional auditory nerve or implantable cochlea cannot benefit from a cochlear implant. The focus of the paper is to review the development and translation of a new type of central auditory prosthesis for this group of patients, which is known as the auditory midbrain implant (AMI) and is designed for electrical stimulation within the inferior colliculus. The rationale and results for the first AMI clinical study using a multi-site single-shank array will be presented initially. Although the AMI has achieved encouraging results in terms of safety and improvements in lip-reading capabilities and environmental awareness, it has not yet provided sufficient speech perception. Animal and human data will then be presented to show that a two-shank AMI array can potentially improve hearing performance by targeting specific neurons of the inferior colliculus. Modifications to the AMI array design, stimulation strategy, and surgical approach have been made that are expected to improve hearing performance in the patients implanted with a two-shank array in an upcoming clinical trial funded by the National Institutes of Health. Positive outcomes from this clinical trial will motivate new efforts and developments toward improving central auditory prostheses for those who cannot sufficiently benefit from cochlear implants. PMID:25613994

  10. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults.

    PubMed

    Lee, Ahreum; Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju

    2018-04-11

    Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, standard deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition.

  11. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  12. A comparison of auditory brainstem responses across diving bird species

    USGS Publications Warehouse

    Crowell, Sara E.; Berlin, Alicia; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied in respect to auditory capabilities (Wever et al., Proc Natl Acad Sci USA 63:676–680, 1969). We, therefore, measured in-air auditory threshold in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of the greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e., frequency at the greatest intensity, of all species' vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range.

  13. A comparison of auditory brainstem responses across diving bird species

    PubMed Central

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied in respect to auditory capabilities (Wever et al. 1969). We therefore measured in-air auditory threshold in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e. frequency at the greatest intensity, of all species’ vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range. PMID:26156644

  14. Probing the independence of formant control using altered auditory feedback

    PubMed Central

    MacDonald, Ewen N.; Purcell, David W.; Munhall, Kevin G.

    2011-01-01

    Two auditory feedback perturbation experiments were conducted to examine the nature of control of the first two formants in vowels. In the first experiment, talkers heard their auditory feedback with either F1 or F2 shifted in frequency. Talkers altered production of the perturbed formant by changing its frequency in the opposite direction to the perturbation but did not produce a correlated alteration of the unperturbed formant. Thus, the motor control system is capable of fine-grained independent control of F1 and F2. In the second experiment, a large meta-analysis was conducted on data from talkers who received feedback in which both F1 and F2 had been perturbed. A moderate correlation was found between individual compensations in F1 and F2, suggesting that the control of F1 and F2 is processed in a common manner at some level. While a wide range of individual compensation magnitudes was observed, no significant correlations were found between individuals’ compensations and vowel space differences. Similarly, no significant correlations were found between individuals’ compensations and variability in normal vowel production. Further, when receiving normal auditory feedback, most of the population exhibited no significant correlation between the natural variation in production of F1 and F2. PMID:21361452
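
    The meta-analysis described above correlates each talker's F1 compensation with their F2 compensation. As a rough illustration of that step only, the sketch below defines compensation as the production change opposite in sign to the feedback shift and correlates the two formants across talkers; all numbers are hypothetical, not the study's data.

        # Minimal sketch: per-talker formant compensation and its F1-F2 correlation.
        import numpy as np
        from scipy.stats import pearsonr

        def compensation(baseline_hz, perturbed_hz, shift_hz):
            """Positive values mean production moved opposite to the feedback shift."""
            return -(np.asarray(perturbed_hz) - np.asarray(baseline_hz)) * np.sign(shift_hz)

        # Hypothetical per-talker mean productions (Hz) under +200 Hz F1 and +250 Hz F2 shifts.
        f1_comp = compensation(baseline_hz=[550, 600, 580, 620],
                               perturbed_hz=[510, 575, 560, 570], shift_hz=+200)
        f2_comp = compensation(baseline_hz=[1500, 1450, 1550, 1600],
                               perturbed_hz=[1460, 1430, 1500, 1540], shift_hz=+250)
        r, p = pearsonr(f1_comp, f2_comp)
        print(f"F1 vs F2 compensation: r = {r:.2f}, p = {p:.3f}")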

  15. Neuromonitoring of cochlea and auditory nerve with multiple extracted parameters during induced hypoxia and nerve manipulation

    NASA Astrophysics Data System (ADS)

    Bohórquez, Jorge; Özdamar, Özcan; Morawski, Krzysztof; Telischi, Fred F.; Delgado, Rafael E.; Yavuz, Erdem

    2005-06-01

    A system capable of comprehensive and detailed intraoperative monitoring of the cochlea and the auditory nerve was developed. The cochlear blood flow (CBF) and the electrocochleogram (ECochGm) were recorded at the round window (RW) niche using a specially designed otic probe. The ECochGm was further processed to obtain cochlear microphonics (CM) and compound action potentials (CAP). The amplitude and phase of the CM were used to quantify the activity of outer hair cells (OHC); CAP amplitude and latency were used to describe the auditory nerve and the synaptic activity of the inner hair cells (IHC). In addition, concurrent monitoring with a second electrophysiological channel was achieved by recording the compound nerve action potential (CNAP) obtained directly from the auditory nerve. Stimulation paradigms, instrumentation and signal processing methods were developed to extract and differentiate the activity of the OHC and the IHC in response to three different frequencies. Narrow-band acoustical stimuli elicited CM signals indicating mainly nonlinear operation of the mechano-electrical transduction of the OHCs. Special envelope detectors were developed and applied to the ECochGm to extract the CM fundamental component and its harmonics in real time. The system was extensively validated in experimental animal surgeries by performing nerve compressions and manipulations.
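
    The envelope detectors mentioned above track the amplitude and phase of the cochlear microphonic at the stimulus frequency and its harmonics. A generic way to do this is a lock-in style detector: multiply the recording by in-phase and quadrature references at each harmonic and low-pass the products. The sketch below illustrates that idea under assumed parameters; it is not the authors' implementation.

        # Minimal sketch: lock-in style tracking of the CM fundamental and harmonics.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def track_harmonics(ecochg, fs, f0, n_harmonics=3, lp_cutoff=50.0):
            """Return {harmonic k: (amplitude(t), phase(t))} at frequencies k*f0."""
            t = np.arange(len(ecochg)) / fs
            b, a = butter(2, lp_cutoff / (fs / 2))             # low-pass for the envelopes
            out = {}
            for k in range(1, n_harmonics + 1):
                ref_i = np.cos(2 * np.pi * k * f0 * t)         # in-phase reference
                ref_q = np.sin(2 * np.pi * k * f0 * t)         # quadrature reference
                i_comp = filtfilt(b, a, ecochg * ref_i)
                q_comp = filtfilt(b, a, ecochg * ref_q)
                out[k] = (2 * np.hypot(i_comp, q_comp), np.arctan2(q_comp, i_comp))
            return out

        # Usage: synthetic CM-like signal, 1-kHz fundamental plus a weaker 2nd harmonic.
        fs, f0 = 20000, 1000.0
        t = np.arange(fs) / fs
        sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
        amps = {k: v[0].mean() for k, v in track_harmonics(sig, fs, f0).items()}
        print(amps)    # amplitude estimates near 1.0, 0.2, and ~0 for the 3rd harmonic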

  16. Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System

    PubMed Central

    Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.

    2015-01-01

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843

  17. Three-dimensional analysis of tubular permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Chai, J.; Wang, J.; Howe, D.

    2006-04-01

    This paper presents results from a three-dimensional finite element analysis of a tubular permanent magnet machine, and quantifies the influence of the laminated modules from which the stator core is assembled on the flux linkage and thrust force capability as well as on the self- and mutual inductances. The three-dimensional finite element (FE) model accounts for the nonlinear, anisotropic magnetization characteristic of the laminated stator structure, and for the voids which exist between the laminated modules. Predicted results are compared with those deduced from an axisymmetric FE model. It is shown that the emf and thrust force deduced from the three-dimensional model are significantly lower than those which are predicted from an axisymmetric field analysis, primarily as a consequence of the teeth and yoke being more highly saturated due to the presence of the voids in the laminated stator core.

  18. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing it with natural human sensing. The current implementation of the device translates signals from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power form, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier Transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
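
    The image-to-sound mapping described above distributes an image in time, with pixel position mapped to frequency and brightness to amplitude. The sketch below is a simple column-scan illustration of that idea under assumed parameters (32 rows, a one-second scan, a 200-4000 Hz range); it is not the NASA implementation.

        # Minimal sketch: column-scan image-to-sound mapping (rows -> frequencies,
        # columns -> time, brightness -> amplitude). All parameters are assumptions.
        import numpy as np

        def image_to_sound(image, fs=16000, scan_time=1.0, f_lo=200.0, f_hi=4000.0):
            """image: 2-D array of brightness in [0, 1]; rows top..bottom, cols left..right."""
            n_rows, n_cols = image.shape
            freqs = np.geomspace(f_hi, f_lo, n_rows)            # top row maps to the highest pitch
            samples_per_col = int(fs * scan_time / n_cols)
            t = np.arange(samples_per_col) / fs
            audio = []
            for col in range(n_cols):                           # scan the image left to right
                tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine per image row
                audio.append((image[:, col, None] * tones).sum(axis=0))
            audio = np.concatenate(audio)
            return audio / (np.max(np.abs(audio)) + 1e-12)

        # Usage: a bright diagonal from top-left to bottom-right becomes a falling pitch sweep.
        audio = image_to_sound(np.eye(32))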

  19. Development of the acoustically evoked behavioral response in larval plainfin midshipman fish, Porichthys notatus.

    PubMed

    Alderks, Peter W; Sisneros, Joseph A

    2013-01-01

    The ontogeny of hearing in fishes has become a major interest among bioacoustics researchers studying fish behavior and sensory ecology. Most fish begin to detect acoustic stimuli during the larval stage, which can be important for navigation, predator avoidance and settlement; however, relatively little is known about the hearing capabilities of larval fishes. We characterized the acoustically evoked behavioral response (AEBR) in the plainfin midshipman fish, Porichthys notatus, and used this innate startle-like response to characterize this species' auditory capability during larval development. Age and size of larval midshipman were highly correlated (r(2) = 0.92). The AEBR was first observed in larvae at 1.4 cm TL. At a size ≥ 1.8 cm TL, all larvae responded to a broadband stimulus of 154 dB re 1 µPa or -15.2 dB re 1 g (z-axis). Lowest AEBR thresholds were 140-150 dB re 1 µPa or -33 to -23 dB re 1 g for frequencies below 225 Hz. Larval fish with size ranges of 1.9-2.4 cm TL had significantly lower best evoked frequencies than the other tested size groups. We also investigated the development of the lateral line organ and its function in mediating the AEBR. The lateral line organ is likely involved in mediating the AEBR but is not necessary to evoke the startle-like response. The midshipman auditory and lateral line systems are functional during early development when the larvae are in the nest, and the auditory system appears to have similar tuning characteristics throughout all life history stages.

  20. Hearing of the African lungfish (Protopterus annectens) suggests underwater pressure detection and rudimentary aerial hearing in early tetrapods.

    PubMed

    Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Madsen, Peter Teglberg

    2015-02-01

    In the transition from an aquatic to a terrestrial lifestyle, vertebrate auditory systems have undergone major changes while adapting to aerial hearing. Lungfish are the closest living relatives of tetrapods and their auditory system may therefore be a suitable model of the auditory systems of early tetrapods such as Acanthostega. Therefore, experimental studies on the hearing capabilities of lungfish may shed light on the possible hearing capabilities of early tetrapods and broaden our understanding of hearing across the water-to-land transition. Here, we tested the hypotheses that (i) lungfish are sensitive to underwater pressure using their lungs as pressure-to-particle motion transducers and (ii) lungfish can detect airborne sound. To do so, we used neurophysiological recordings to estimate the vibration and pressure sensitivity of African lungfish (Protopterus annectens) in both water and air. We show that lungfish detect underwater sound pressure via pressure-to-particle motion transduction by air volumes in their lungs. The morphology of lungfish shows no specialized connection between these air volumes and the inner ears, and so our results imply that air breathing may have enabled rudimentary pressure detection as early as the Devonian era. Additionally, we demonstrate that lungfish in spite of their atympanic middle ear can detect airborne sound through detection of sound-induced head vibrations. This strongly suggests that even vertebrates with no middle ear adaptations for aerial hearing, such as the first tetrapods, had rudimentary aerial hearing that may have led to the evolution of tympanic middle ears in recent tetrapods. © 2015. Published by The Company of Biologists Ltd.

  1. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder

    PubMed Central

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-01

    Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042

  2. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder.

    PubMed

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-26

    Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD.
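
    For readers unfamiliar with how a P300 amplitude such as the one compared above is typically extracted from an oddball recording, the sketch below epochs a single EEG channel around deviant-tone onsets, baseline-corrects, averages, and takes the mean amplitude in a late positive window. The window, baseline length and sampling rate are generic assumptions, not this study's parameters.

```python
# Generic P300 extraction sketch: epoch, baseline-correct, average, and
# measure the mean amplitude in an assumed 250-500 ms window.
import numpy as np

def p300_amplitude(eeg, onsets, fs=500, pre=0.1, post=0.6, win=(0.25, 0.5)):
    """eeg: 1-D channel (e.g. a centro-parietal site); onsets: deviant onsets in samples."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = np.array([eeg[o - pre_n:o + post_n] for o in onsets
                       if o - pre_n >= 0 and o + post_n <= len(eeg)])
    epochs -= epochs[:, :pre_n].mean(axis=1, keepdims=True)   # baseline correction
    erp = epochs.mean(axis=0)                                  # average over trials
    t = np.arange(-pre_n, post_n) / fs
    return erp[(t >= win[0]) & (t <= win[1])].mean()

# Toy usage on synthetic data; a smaller value would correspond to the
# reduced P300 reported for the IGD group.
amp = p300_amplitude(np.random.randn(60000), np.arange(1000, 59000, 1500))
```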

  3. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy

    PubMed Central

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Objective: Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual’s capacity to drive safely. Methods: The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered as incompatible with safe driving capabilities. Results: Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion: We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely. PMID:25709406

  4. Measuring the performance of visual to auditory information conversion.

    PubMed

    Tan, Shern Shiou; Maul, Tomás Henrique Bode; Mennie, Neil Russell

    2013-01-01

    Visual to auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated from image sonification systems are still easier to learn and adapt to compared to other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure the performance of such systems. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank the systems accordingly. Performance is measured by both the interpretability and also the information preservation of visual to auditory conversions. Interpretability is measured by computing the correlation of inter image distance (IID) and inter sound distance (ISD) whereas the information preservation is computed by applying Information Theory to measure the entropy of both visual and corresponding auditory signals. These measurements provide a basis and some insights on how the systems work. With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost effectively regain enough visual functions to allow them to lead secure and productive lives.
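
    The two quantities named in this abstract, interpretability as the correlation between inter-image distances (IID) and inter-sound distances (ISD), and information preservation via signal entropy, can be sketched compactly. In the sketch below the Euclidean distance, Pearson correlation and histogram binning are assumptions for illustration; the paper's exact distance measures may differ.

```python
# Hedged sketch of IID/ISD correlation and entropy-based information measures.
import numpy as np
from itertools import combinations

def pairwise_dists(items):
    return np.array([np.linalg.norm(a - b) for a, b in combinations(items, 2)])

def interpretability(images, sounds):
    iid, isd = pairwise_dists(images), pairwise_dists(sounds)
    return np.corrcoef(iid, isd)[0, 1]            # Pearson correlation of IID vs ISD

def entropy(x, bins=64):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()                # Shannon entropy in bits

# images: list of flattened image arrays; sounds: list of their sonifications.
images = [np.random.rand(64 * 64) for _ in range(5)]
sounds = [np.random.rand(22050) for _ in range(5)]
print(interpretability(images, sounds), entropy(images[0]), entropy(sounds[0]))
```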

  5. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults

    PubMed Central

    Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju

    2018-01-01

    Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition. PMID:29641462

  6. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy.

    PubMed

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual's capacity to drive safely. The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered as incompatible with safe driving capabilities. Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely.

  7. A California sea lion (Zalophus californianus) can keep the beat: motor entrainment to rhythmic auditory stimuli in a non vocal mimic.

    PubMed

    Cook, Peter; Rouse, Andrew; Wilson, Margaret; Reichmuth, Colleen

    2013-11-01

    Is the ability to entrain motor activity to a rhythmic auditory stimulus, that is "keep a beat," dependent on neural adaptations supporting vocal mimicry? That is the premise of the vocal learning and synchronization hypothesis, recently advanced to explain the basis of this behavior (A. Patel, 2006, Musical Rhythm, Linguistic Rhythm, and Human Evolution, Music Perception, 24, 99-104). Prior to the current study, only vocal mimics, including humans, cockatoos, and budgerigars, have been shown to be capable of motoric entrainment. Here we demonstrate that a less vocally flexible animal, a California sea lion (Zalophus californianus), can learn to entrain head bobbing to an auditory rhythm meeting three criteria: a behavioral response that does not reproduce the stimulus; performance transfer to a range of novel tempos; and entrainment to complex, musical stimuli. These findings show that the capacity for entrainment of movement to rhythmic sounds does not depend on a capacity for vocal mimicry, and may be more widespread in the animal kingdom than previously hypothesized.

  8. Probing sensorimotor integration during musical performance.

    PubMed

    Furuya, Shinichi; Furukawa, Yuta; Uehara, Kazumasa; Oku, Takanori

    2018-03-10

    An integration of afferent sensory information from the visual, auditory, and proprioceptive systems into execution and update of motor programs plays crucial roles in control and acquisition of skillful sequential movements in musical performance. However, the conventional behavioral and neurophysiological techniques that have been applied to study simple motor behaviors are limited in their ability to elucidate the online sensorimotor integration processes underlying skillful musical performance. Here, we propose two novel techniques that were developed to investigate the roles of auditory and proprioceptive feedback in piano performance. First, a closed-loop noninvasive brain stimulation system that consists of transcranial magnetic stimulation, a motion sensor, and a microcomputer enabled assessment of time-varying cortical processes subserving auditory-motor integration during piano playing. Second, a force-field system capable of manipulating the weight of a piano key allowed for characterizing movement adaptation based on the feedback obtained, which can shed light on the formation of an internal representation of the piano. Results of neurophysiological and psychophysics experiments provided evidence validating these systems as effective means for disentangling computational and neural processes of sensorimotor integration in musical performance. © 2018 New York Academy of Sciences.

  9. Auditory evoked potential (AEP) measurements in stranded rough-toothed dolphins (Steno bredanensis)

    NASA Astrophysics Data System (ADS)

    Cook, Mandy L. H.; Manire, Charles A.; Mann, David A.

    2005-04-01

    Thirty-six rough-toothed dolphins (Steno bredanensis) live-stranded on Hutchinson Island, FL on August 6, 2004. Seven animals were transported to Mote Marine Laboratory for rehabilitation. Two auditory evoked potential (AEP) measurements were performed on each of five of these dolphins in air using a jawphone to present acoustic stimuli. Modulation rate transfer functions (MRTFs) were measured to establish how well the auditory system follows the temporal envelope of acoustic stimuli. A 40 kHz stimulus carrier was amplitude modulated (AM) with varying rates ranging from 200 Hz to 1800 Hz, in 200 Hz steps. The best AM-rate from the first dolphin tested was 1500 Hz. This AM rate was used in subsequent AEP measurements to determine evoked-potential hearing thresholds between 5000 and 80000 Hz. These findings show that rough-toothed dolphins can detect sounds between 5 and 80 kHz, and are most likely capable of detecting frequencies much higher than 80 kHz. MRTF data suggest that rough-toothed dolphins have a high temporal resolution, similar to that of other cetaceans.
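
    The stimulus family described here, a 40 kHz carrier sinusoidally amplitude modulated at rates stepped from 200 to 1800 Hz, is straightforward to generate. The sketch below is a generic illustration; the sample rate, duration and modulation depth are assumptions rather than the values used in these measurements.

```python
# Generic sinusoidally amplitude-modulated (SAM) tone generator.
import numpy as np

def sam_tone(carrier_hz=40000.0, mod_hz=1500.0, depth=1.0, dur=0.02, fs=400000):
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)   # modulation envelope
    return 0.5 * envelope * np.sin(2 * np.pi * carrier_hz * t)

# One stimulus per modulation rate, 200-1800 Hz in 200 Hz steps.
stimuli = {rate: sam_tone(mod_hz=rate) for rate in range(200, 1801, 200)}
```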

  10. Signal Processing in Periodically Forced Gradient Frequency Neural Networks

    PubMed Central

    Kim, Ji Chul; Large, Edward W.

    2015-01-01

    Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
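
    The building block analyzed in this work is a canonical Hopf-type oscillator driven by a periodic signal. As a purely illustrative sketch, the code below integrates one such oscillator in the frame co-rotating with the input, so that phase locking appears as convergence to a fixed point; the truncated equation form, parameter values and Euler step are assumptions, not the paper's exact model or analysis.

```python
# Sketch of a periodically forced canonical (Hopf-type) oscillator, integrated
# in the rotating frame w = z * exp(-i*2*pi*f_in*t):
#     dw/dt = w*(alpha + i*2*pi*(f0 - f_in) + beta*|w|^2) + F
import numpy as np

def driven_canonical(f0=200.0, f_in=205.0, alpha=-1.0, beta=-10.0, F=0.5,
                     dt=1e-3, T=10.0):
    n = int(T / dt)
    w = 0.001 + 0j
    domega = 2 * np.pi * (f0 - f_in)              # detuning from the input frequency
    for _ in range(n):
        dw = w * (alpha + 1j * domega + beta * abs(w) ** 2) + F
        w += dt * dw                              # forward Euler in the rotating frame
    return abs(w)                                 # steady amplitude if the oscillator locks

print(driven_canonical())                         # response when driven 5 Hz off resonance
```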

  11. The Dolphin Sonar: Excellent Capabilities In Spite of Some Mediocre Properties

    NASA Astrophysics Data System (ADS)

    Au, Whitlow W. L.

    2004-11-01

    Dolphin sonar research has been conducted for several decades and much has been learned about the capabilities of echolocating dolphins to detect, discriminate and recognize underwater targets. The results of these research projects suggest that dolphins possess the most sophisticated of all sonars for short ranges and shallow water where reverberation and clutter echoes are high. The critical feature of the dolphin sonar is the capability of discriminating and recognizing complex targets in a highly reverberant and noisy environment. The dolphin's detection threshold in reverberation occurs at an echo-to-reverberation ratio of approximately 4 dB. Echolocating dolphins also have the capability to make fine discriminations of target properties such as wall thickness differences of water-filled cylinders and material differences in metallic plates. The high-resolution property of the animal's echolocation signals and the high dynamic range of its auditory system are important factors in their outstanding discrimination capabilities. In the cylinder wall-thickness discrimination experiment, time differences between echo highlights as small as 500-600 ns can be resolved by echolocating dolphins. Measurements of the targets used in the metallic plate composition experiment suggest that dolphins attended to echo components that were 20-30 dB below the maximum level for a specific target. It is interesting to realize that some of the properties of the dolphin sonar system are fairly mediocre, yet the total performance of the system is often outstanding. When compared to some technological sonars, the energy content of the dolphin sonar signal is not very high, the transmission and receiving beamwidths are fairly large, and the auditory filters are not very narrow. Yet the dolphin sonar has demonstrated excellent capabilities in spite of the mediocre features of its "hardware." Reasons why dolphins can perform complex sonar tasks will be discussed in light of the "equipment" they possess.

  12. Is there a best side for cochlear implants in post-lingual patients?

    PubMed

    Amaral, Maria Stella Arantes do; Damico, Thiago A; Gonçales, Alina S; Reis, Ana C M B; Isaac, Myriam de Lima; Massuda, Eduardo T; Hyppolito, Miguel Angelo

    2017-07-29

    The cochlear implant is a sensory prosthesis capable of restoring hearing in patients with severe or profound bilateral sensorineural hearing loss. The aim was to evaluate whether there is a better side to be implanted in post-lingual patients. Retrospective longitudinal study. Participants were 40 subjects, of both sexes, with a mean age of 47 years, with post-lingual hearing loss, users of a unilateral cochlear implant for more than 12 months and less than 24 months, with asymmetric auditory reserve between the ears (a difference of 10 dBNA in at least one of the frequencies with a response between the ears), divided into two groups. Group A was composed of individuals with the cochlear implant in the ear with the better auditory reserve, and Group B of individuals implanted in the ear with the lower auditory reserve relative to the contralateral side. There was no statistical difference in the tonal auditory threshold before and after cochlear implantation. Better speech perception in pre-cochlear implant tests was present in Group B (20%), but the final results were similar in both groups. Cochlear implantation in the ear with the worse auditory residue favors bimodal hearing, which would allow binaural summation without compromising the improvement of the audiometric threshold and speech perception. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. Neuroestrogen signaling in the songbird auditory cortex propagates into a sensorimotor network via an `interface' nucleus

    PubMed Central

    Pawlisch, Benjamin A.; Remage-Healey, Luke

    2014-01-01

    Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher-order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these findings support the hypothesis that acute neurosteroid actions can propagate within and between neural circuits to modulate their functional connectivity. PMID:25453773

  14. Neuroestrogen signaling in the songbird auditory cortex propagates into a sensorimotor network via an 'interface' nucleus.

    PubMed

    Pawlisch, B A; Remage-Healey, L

    2015-01-22

    Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these findings support the hypothesis that acute neurosteroid actions can propagate within and between neural circuits to modulate their functional connectivity. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Independent component analysis for cochlear implant artifacts attenuation from electrically evoked auditory steady-state response measurements

    NASA Astrophysics Data System (ADS)

    Deprez, Hanne; Gransier, Robin; Hofmann, Michael; van Wieringen, Astrid; Wouters, Jan; Moonen, Marc

    2018-02-01

    Objective. Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach. ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results. For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance. We hypothesize that ICA is capable of separating CI artifacts and EASSR in case the contralateral hemisphere is EASSR dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
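
    As a rough sketch of the kind of pipeline evaluated above (decompose the multichannel EEG, flag components dominated by the high-frequency CI stimulation artifact based on their spectrum, zero them, and back-project), the code below uses scikit-learn's FastICA and a simple spectral-power criterion. The cutoff frequency, rejection threshold and component count are assumptions for illustration, not the study's automatic criterion.

```python
# Hedged sketch of ICA-based CI-artifact attenuation with spectral IC rejection.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import welch

def remove_ci_artifacts(eeg, fs=1000.0, cutoff_hz=300.0, ratio_thresh=0.5):
    """eeg: array of shape (n_samples, n_channels)."""
    ica = FastICA(n_components=eeg.shape[1], max_iter=1000)
    sources = ica.fit_transform(eeg)                   # (n_samples, n_components)
    keep = []
    for i in range(sources.shape[1]):
        f, pxx = welch(sources[:, i], fs=fs, nperseg=1024)
        high_frac = pxx[f >= cutoff_hz].sum() / pxx.sum()   # power above the cutoff
        keep.append(high_frac < ratio_thresh)          # reject artifact-dominated ICs
    cleaned_sources = sources * np.array(keep)         # zero out rejected components
    return ica.inverse_transform(cleaned_sources)      # back-project to channel space

cleaned = remove_ci_artifacts(np.random.randn(5000, 8))
```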

  16. Ontogenetic investigation of underwater hearing capabilities in loggerhead sea turtles (Caretta caretta) using a dual testing approach.

    PubMed

    Lavender, Ashley L; Bartol, Soraya M; Bartol, Ian K

    2014-07-15

    Sea turtles reside in different acoustic environments with each life history stage and may have different hearing capacity throughout ontogeny. For this study, two independent yet complementary techniques for hearing assessment, i.e. behavioral and electrophysiological audiometry, were employed to (1) measure hearing in post-hatchling and juvenile loggerhead sea turtles Caretta caretta (19-62 cm straight carapace length) to determine whether these migratory turtles exhibit an ontogenetic shift in underwater auditory detection and (2) evaluate whether hearing frequency range and threshold sensitivity are consistent in behavioral and electrophysiological tests. Behavioral trials first required training turtles to respond to known frequencies, a multi-stage, time-intensive process, and then recording their behavior when they were presented with sound stimuli from an underwater speaker using a two-response forced-choice paradigm. Electrophysiological experiments involved submerging restrained, fully conscious turtles just below the air-water interface and recording auditory evoked potentials (AEPs) when sound stimuli were presented using an underwater speaker. No significant differences in behavior-derived auditory thresholds or AEP-derived auditory thresholds were detected between post-hatchling and juvenile sea turtles. While hearing frequency range (50-1000/1100 Hz) and highest sensitivity (100-400 Hz) were consistent in audiograms pooled by size class for both behavior and AEP experiments, both post-hatchlings and juveniles had significantly higher AEP-derived than behavior-derived auditory thresholds, indicating that behavioral assessment is a more sensitive testing approach. The results from this study suggest that post-hatchling and juvenile loggerhead sea turtles are low-frequency specialists, exhibiting little differences in threshold sensitivity and frequency bandwidth despite residence in acoustically distinct environments throughout ontogeny. © 2014. Published by The Company of Biologists Ltd.

  17. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures.

    PubMed

    Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A

    2008-12-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

  18. Development of the Acoustically Evoked Behavioral Response in Larval Plainfin Midshipman Fish, Porichthys notatus

    PubMed Central

    Alderks, Peter W.; Sisneros, Joseph A.

    2013-01-01

    The ontogeny of hearing in fishes has become a major interest among bioacoustics researchers studying fish behavior and sensory ecology. Most fish begin to detect acoustic stimuli during the larval stage, which can be important for navigation, predator avoidance and settlement; however, relatively little is known about the hearing capabilities of larval fishes. We characterized the acoustically evoked behavioral response (AEBR) in the plainfin midshipman fish, Porichthys notatus, and used this innate startle-like response to characterize this species' auditory capability during larval development. Age and size of larval midshipman were highly correlated (r2 = 0.92). The AEBR was first observed in larvae at 1.4 cm TL. At a size ≥1.8 cm TL, all larvae responded to a broadband stimulus of 154 dB re 1 µPa or −15.2 dB re 1 g (z-axis). Lowest AEBR thresholds were 140–150 dB re 1 µPa or −33 to −23 dB re 1 g for frequencies below 225 Hz. Larval fish with size ranges of 1.9–2.4 cm TL had significantly lower best evoked frequencies than the other tested size groups. We also investigated the development of the lateral line organ and its function in mediating the AEBR. The lateral line organ is likely involved in mediating the AEBR but not necessary to evoke the startle-like response. The midshipman auditory and lateral line systems are functional during early development when the larvae are in the nest, and the auditory system appears to have similar tuning characteristics throughout all life history stages. PMID:24340003

  19. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language

    PubMed Central

    Poliva, Oren

    2016-01-01

    The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences). PMID:27445676

  20. Hair cell heterogeneity and ultrasonic hearing: recent advances in understanding fish hearing.

    PubMed Central

    Popper, A N

    2000-01-01

    The past decade has seen a wealth of new data on the auditory capabilities and mechanisms of fishes. We now have a significantly better appreciation of the structure and function of the auditory system in fishes with regard to their peripheral and central anatomy, physiology, behaviour, sound source localization and hearing capabilities. This paper deals with two of the newest of these findings, hair cell heterogeneity and the detection of ultrasound. As a result of this recent work, we now know that fishes have several different types of sensory hair cells in both the ear and lateral line and there is a growing body of evidence to suggest that these hair cell types arose very early in the evolution of the octavolateralis system. There is also some evidence to suggest that the differences in the hair cell types have functional implications for the way the ear and lateral line of fishes detect and process stimuli. Behavioural studies have shown that, whereas most fishes can only detect sound to 1-3 kHz, several species of the genus Alosa (Clupeiformes, i.e. herrings and their relatives) can detect sounds up to 180 kHz (or even higher). It is suggested that this capability evolved so that these fishes can detect one of their major predators, echolocating dolphins. The mechanism for ultrasound detection remains obscure, though it is hypothesized that the highly derived utricle of the inner ear in these species is involved. PMID:11079414

  1. Intracochlear Drug Delivery Systems

    PubMed Central

    Borenstein, Jeffrey T.

    2011-01-01

    Introduction: Advances in molecular biology and in the basic understanding of the mechanisms associated with sensorineural hearing loss and other diseases of the inner ear are paving the way towards new approaches for treatments for millions of patients. However, the cochlea is a particularly challenging target for drug therapy, and new technologies will be required to provide safe and efficacious delivery of these compounds. Emerging delivery systems based on microfluidic technologies are showing promise as a means for direct intracochlear delivery. Ultimately, these systems may serve as a means for extended delivery of regenerative compounds to restore hearing in patients suffering from a host of auditory diseases. Areas covered in this review: Recent progress in the development of drug delivery systems capable of direct intracochlear delivery is reviewed, including passive systems such as osmotic pumps, active microfluidic devices, and systems combined with currently available devices such as cochlear implants. The aim of this article is to provide a concise review of intracochlear drug delivery systems currently under development, and ultimately capable of being combined with emerging therapeutic compounds for the treatment of inner ear diseases. Expert Opinion: Safe and efficacious treatment of auditory diseases will require the development of microscale delivery devices, capable of extended operation and direct application to the inner ear. These advances will require miniaturization and integration of multiple functions, including drug storage, delivery, power management and sensing, ultimately enabling closed-loop control and timed-sequence delivery devices for treatment of these diseases. PMID:21615213

  2. Mapping Directly Imaged Giant Exoplanets

    NASA Astrophysics Data System (ADS)

    Kostov, Veselin; Apai, Dániel

    2013-01-01

    With the increasing number of directly imaged giant exoplanets, the current atmosphere models are often not capable of fully explaining the spectra and luminosity of the sources. A particularly challenging component of the atmosphere models is the formation and properties of condensate cloud layers, which fundamentally impact the energetics, opacity, and evolution of the planets. Here we present a suite of techniques that can be used to estimate the level of rotational modulations these planets may show. We propose that the time-resolved observations of such periodic photometric and spectroscopic variations of extrasolar planets due to their rotation can be used as a powerful tool to probe the heterogeneity of their optical surfaces. In this paper, we develop simulations to explore the capabilities of current and next-generation ground- and space-based instruments for this technique. We address and discuss the following questions: (1) what planet properties can be deduced from the light curve and/or spectra, and in particular can we determine rotation periods, spot coverage, spot colors, and spot spectra?; (2) what is the optimal configuration of instrument/wavelength/temporal sampling required for these measurements?; and (3) can principal component analysis be used to invert the light curve and deduce the surface map of the planet? Our simulations describe the expected spectral differences between homogeneous (clear or cloudy) and patchy atmospheres, outline the significance of the dominant absorption features of H2O, CH4, and CO, and provide a method to distinguish these two types of atmospheres. Assuming surfaces with and without clouds for most currently imaged planets, the current models predict the largest variations in the J band. Simulated photometry from current and future instruments is used to estimate the level of detectable photometric variations. We conclude that future instruments will be able to recover not only the rotation periods, cloud cover, cloud colors, and spectra but even cloud evolution. We also show that a longitudinal map of the planet's atmosphere can be deduced from its disk-integrated light curves.
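
    Question (3) above asks whether principal component analysis can separate the independent components of rotational modulation. As a purely illustrative sketch, unrelated to the paper's simulations, the code below applies PCA (via SVD) to a stack of synthetic time-resolved spectra and reports how much variance the leading components carry.

```python
# Illustrative PCA of time-resolved spectra: epochs x wavelengths in, leading
# temporal scores, spectral components and explained-variance fractions out.
import numpy as np

def modulation_components(spectra, n_keep=3):
    """spectra: array of shape (n_epochs, n_wavelengths)."""
    centered = spectra - spectra.mean(axis=0)          # remove the mean spectrum
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    return u[:, :n_keep] * s[:n_keep], vt[:n_keep], explained[:n_keep]

scores, components, frac = modulation_components(np.random.rand(40, 200))
print(frac)   # fraction of variance carried by the leading components
```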

  3. Auditory Localization Performance with Asymmetric Integrated Eye and Ear Protection

    DTIC Science & Technology

    2018-03-01

    The AIEEP is a tactical communications and protection system (TCAPS) that also provides eye protection. Participants used a laser pointer ... difference cues ... Loudspeaker configuration in the dome room: Method 2 testing incorporates 36 loudspeakers spaced at even intervals of 10 ... attenuation capabilities and electronic limiters that suppress transmission of impulsive and high-level steady-state noise. In addition, the manufacturer ...

  4. Command History OPNAV 5750-1 Fiscal Year 2004

    DTIC Science & Technology

    2006-05-04

    highly capable facilities including three hyperbaric chambers, anechoic chambers, auditory and vision laboratories, and a closed atmosphere test room ... 3 Hyperbaric Chambers (1 Saturation) • 1000 m3 Anechoic Chamber • 140 m3 Reverberant Chamber • 10 Audio Testing Booths • Vision Research ... Using Hand-Held Personal Digital Assistants (PDAs) in a Hyperbaric Environment and the PDA-based Submarine Escape and Rescue Calculator ...

  5. Neurophysiological Estimates of Human Performance Capabilities in Aerospace Systems

    DTIC Science & Technology

    1975-01-27

    effects on the visual system (in lateral geniculate bodies and optic cortex) depending on the frequency of auditory stimulation ... of spatial positions. Correct responses were rewarded with food. EEG activity was recorded in the hippocampus, hypothalamus and lateral geniculate ... movement or an object movement reduce transmission of visual information through the lateral geniculate nucleus. This may be a mechanism for saccadic ...

  6. Affect in Human-Robot Interaction

    DTIC Science & Technology

    2014-01-01

    is capable of learning and producing a large number of facial expressions based on Ekman’s Facial Action Coding System, FACS (Ekman and Friesen 1978) ... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of ... robot’s behavior through decision-making, learning, or action selection, a number of researchers used the fuzzy logic approach to emotion generation ...

  7. Intrusion recognition for optic fiber vibration sensor based on the selective attention mechanism

    NASA Astrophysics Data System (ADS)

    Xu, Haiyan; Xie, Yingjuan; Li, Min; Zhang, Zhuo; Zhang, Xuewu

    2017-11-01

    Distributed fiber-optic vibration sensors have received extensive investigation and play a significant role in the sensor landscape. A fiber-optic perimeter detection system based on an all-fiber interferometric sensor is proposed which, through back-end analysis, processing and intelligent identification, can distinguish the effects of different intrusion activities. In this paper, an intrusion recognition method based on the auditory selective attention mechanism is proposed. Firstly, considering the time-frequency characteristics of the vibration, the spectrogram is calculated. Secondly, imitating the selective attention mechanism, the color, direction and brightness maps of the spectrogram are computed. Based on these maps, the feature matrix is formed after normalization, as sketched in the example that follows. The system can recognize intrusion activities occurring along the perimeter sensors. Experimental results show that the proposed method is able to differentiate intrusion signals from ambient noise along the perimeter. Moreover, the recognition rate of the system is improved while the false alarm rate is reduced; the approach is validated by extensive practical experiments and project deployments.
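
    The sketch below illustrates the general shape of such a feature-extraction stage: a spectrogram of the vibration signal followed by intensity- and orientation-style maps in the spirit of selective attention models, normalized and stacked into a feature matrix. The Sobel-based orientation map and the normalization are assumptions, not the paper's exact feature maps.

```python
# Hedged sketch: spectrogram plus attention-style brightness and direction maps.
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import sobel

def attention_features(x, fs=10000):
    f, t, sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    log_s = np.log1p(sxx)                                           # brightness (intensity) map
    orient = np.hypot(sobel(log_s, axis=0), sobel(log_s, axis=1))   # direction (edge) map
    maps = []
    for m in (log_s, orient):
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)             # per-map normalization
        maps.append(m)
    return np.stack(maps)                    # feature matrix for the recognizer

feats = attention_features(np.random.randn(50000))
print(feats.shape)
```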

  8. Speech Intelligibility Predicted from Neural Entrainment of the Speech Envelope.

    PubMed

    Vanthornhout, Jonas; Decruy, Lien; Wouters, Jan; Simon, Jonathan Z; Francart, Tom

    2018-04-01

    Speech intelligibility is currently measured by scoring how well a person can identify a speech signal. The results of such behavioral measures reflect neural processing of the speech signal, but are also influenced by language processing, motivation, and memory. Very often, electrophysiological measures of hearing give insight into the neural processing of sound. However, in most methods, non-speech stimuli are used, making it hard to relate the results to behavioral measures of speech intelligibility. The use of natural running speech as a stimulus in electrophysiological measures of hearing is a paradigm shift that makes it possible to bridge the gap between behavioral and electrophysiological measures. Here, by decoding the speech envelope from the electroencephalogram, and correlating it with the stimulus envelope, we demonstrate an electrophysiological measure of neural processing of running speech. We show that behaviorally measured speech intelligibility is strongly correlated with our electrophysiological measure. Our results pave the way towards an objective and automatic way of assessing neural processing of speech presented through auditory prostheses, reducing confounds such as attention and cognitive capabilities. We anticipate that our electrophysiological measure will allow better differential diagnosis of the auditory system, and will allow the development of closed-loop auditory prostheses that automatically adapt to individual users.
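
    Envelope-reconstruction analyses of this kind are usually built as a "backward model": regress the speech envelope on time-lagged EEG and correlate the reconstruction with the true envelope. The sketch below is a minimal version under assumed lag range, ridge penalty and sampling; it ignores cross-validation and edge effects and is not the authors' implementation.

```python
# Minimal backward-model sketch: reconstruct the speech envelope from lagged
# EEG via ridge regression, then correlate with the true envelope.
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack 0..max_lag sample delays of each channel into one design matrix
    (wrap-around edges are ignored for this sketch)."""
    return np.hstack([np.roll(eeg, lag, axis=0) for lag in range(max_lag + 1)])

def decode_envelope(eeg, envelope, max_lag=25, ridge=1e3):
    X = lag_matrix(eeg, max_lag)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)
    recon = X @ w
    return np.corrcoef(recon, envelope)[0, 1]   # the envelope-tracking measure

# Toy usage: 16-channel EEG and an envelope sampled at the same rate.
score = decode_envelope(np.random.randn(6400, 16), np.random.randn(6400))
```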

  9. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization to sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.

  10. Apparatus for providing sensory substitution of force feedback

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J. (Inventor); Sheridan, Thomas B. (Inventor)

    1995-01-01

    A feedback apparatus for an operator to control an effector that is remote from the operator to interact with a remote environment has a local input device to be manipulated by the operator. Sensors in the effector's environment are capable of sensing the amplitude of forces arising between the effector and its environment, the direction of application of such forces, or both amplitude and direction. A feedback signal corresponding to such a component of the force is generated and transmitted to the environment of the operator. The signal is transduced into an auditory sensory substitution signal to which the operator is sensitive. Sound production apparatus present the auditory signal to the operator. The full range of the force amplitude may be represented by a single audio speaker. Auditory display elements may be stereo headphones or free-standing audio speakers, numbering from one to many more than two. The location of the application of the force may also be specified by the location of audio speakers that generate signals corresponding to specific forces. Alternatively, the location may be specified by the frequency of an audio signal, or by the apparent location of an audio signal, as simulated by a combination of signals originating at different locations.
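
    As a toy illustration of the substitution scheme described above (force amplitude rendered as loudness, force location rendered as pitch), the sketch below maps a sensed force value and a location index to a short tone. The mapping constants are invented for illustration and do not come from the patent.

```python
# Toy force-to-audio sensory substitution mapping.
import numpy as np

def force_to_tone(force_n, location_id, fs=16000, dur=0.1,
                  base_hz=300.0, step_hz=150.0, full_scale_n=50.0):
    t = np.arange(int(fs * dur)) / fs
    freq = base_hz + step_hz * location_id            # location index -> pitch
    amp = np.clip(force_n / full_scale_n, 0.0, 1.0)   # force amplitude -> loudness
    return amp * np.sin(2 * np.pi * freq * t)

# 20 N applied at location 2 becomes a 600 Hz tone at 40% of full scale.
tone = force_to_tone(20.0, 2)
```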

  11. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  12. Mismatch Negativity (MMN) as an Index of Cognitive Dysfunction

    PubMed Central

    Näätänen, Risto; Sussman, Elyse S.; Salisbury, Dean; Shafer, Valerie L.

    2014-01-01

    Cognition is often affected in a variety of neuropsychiatric, neurological, and neurodevelopmental disorders. The neural discriminative response, reflected in mismatch negativity (MMN) and its magnetoencephalographic equivalent (MMNm), has been used as a tool to study a variety of disorders involving auditory cognition. MMN/MMNm is an involuntary brain response to auditory change or, more generally, to pattern regularity violation. For a number of disorders, MMN/MMNm amplitude to sound deviance has been shown to be attenuated or the peak-latency of the component prolonged compared to controls. This general finding suggests that while not serving as a specific marker to any particular disorder, MMN may be useful for understanding factors of cognition in various disorders, and has potential to serve as an indicator of risk. This review presents a brief history of the MMN, followed by a description of how MMN has been used to index auditory processing capability in a range of neuropsychiatric, neurological, and neurodevelopmental disorders. Finally, we suggest future directions for research to further enhance our understanding of the neural substrate of deviance detection that could lead to improvements in the use of MMN as a clinical tool. PMID:24838819

  13. Decreased echolocation performance following high-frequency hearing loss in the false killer whale (Pseudorca crassidens).

    PubMed

    Kloepper, L N; Nachtigall, P E; Gisiner, R; Breese, M

    2010-11-01

    Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may provide evidence for the reason for the development of high-frequency hearing in echolocating animals by demonstrating how high-frequency hearing assists in the functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.

  14. The Persian version of auditory word discrimination test (P-AWDT) for children: Development, validity, and reliability.

    PubMed

    Hashemi, Nassim; Ghorbani, Ali; Soleymani, Zahra; Kamali, Mohmmad; Ahmadi, Zohreh Ziatabar; Mahmoudian, Saeid

    2018-07-01

    Auditory discrimination of speech sounds is an important perceptual ability and a precursor to the acquisition of language. Auditory information is at least partially necessary for the acquisition and organization of phonological rules. There are few standardized behavioral tests to evaluate phonemic distinctive features in children with or without speech and language disorders. The main objective of the present study was to develop the Persian version of the auditory word discrimination test (P-AWDT) for 4-8-year-old children and to evaluate its validity and reliability. A total of 120 typical children and 40 children with speech sound disorder (SSD) participated in the present study. The test comprised 160 monosyllabic paired words distributed across Forms A-1 and A-2 for the initial consonants (80 words) and Forms B-1 and B-2 for the final consonants (80 words). Moreover, the discrimination of vowels was randomly included in all forms. Content validity was calculated, and 50 children repeated the test twice with a two-week interval (test-retest reliability). Further analyses included validity, the intraclass correlation coefficient (ICC), Cronbach's alpha (internal consistency), age groups, and gender. The content validity index (CVI) and the test-retest reliability of the P-AWDT were 63%-86% and 81%-96%, respectively. Moreover, the total Cronbach's alpha for internal consistency was relatively high (0.93). Comparison of the mean P-AWDT scores of the typical children and the children with SSD revealed a significant difference: the group with SSD showed a greater severity of deficit in auditory word discrimination than the typical group. In addition, the difference between the age groups was statistically significant, especially for the 4-4.11-year-old children. The performance of the two gender groups was similar. The comparison of the P-AWDT scores between the typical children and the children with SSD demonstrated differences in the capabilities of auditory phonological discrimination in both initial and final positions. These findings suggest that the P-AWDT meets appropriate validity and reliability criteria. The P-AWDT can be used to measure the distinctive features of phonemes and the auditory discrimination of initial and final consonants and middle vowels of words in 4-8-year-old typical children and children with SSD. Copyright © 2018. Published by Elsevier B.V.
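
    The internal-consistency figure quoted above (a total Cronbach's alpha of 0.93) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch of that computation is given below; the function name and the subjects-by-items data layout are illustrative assumptions, not details taken from the study.

        import numpy as np

        def cronbach_alpha(scores):
            # scores: 2-D array, rows = subjects, columns = test items.
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]                        # number of items
            item_var = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)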

  15. Amphibious hearing in spotted seals (Phoca largha): underwater audiograms, aerial audiograms and critical ratio measurements.

    PubMed

    Sills, Jillian M; Southall, Brandon L; Reichmuth, Colleen

    2014-03-01

    Spotted seals (Phoca largha) inhabit Arctic regions that are facing both rapid climate change and increasing industrialization. While little is known about their sensory capabilities, available knowledge suggests that spotted seals and other ice seals use sound to obtain information from the surrounding environment. To quantitatively assess their auditory capabilities, the hearing of two young spotted seals was tested using a psychophysical paradigm. Absolute detection thresholds for tonal sounds were measured in air and under water over the frequency range of hearing, and critical ratios were determined using octave-band masking noise in both media. The behavioral audiograms show a range of best sensitivity spanning four octaves in air, from approximately 0.6 to 11 kHz. The range of sensitive hearing extends across seven octaves in water, with lowest thresholds between 0.3 and 56 kHz. Critical ratio measurements were similar in air and water and increased monotonically from 12 dB at 0.1 kHz to 30 dB at 25.6 kHz, indicating that the auditory systems of these seals are quite efficient at extracting signals from background noise. This study demonstrates that spotted seals possess sound reception capabilities different from those previously described for ice seals, and more similar to those reported for harbor seals (Phoca vitulina). The results are consistent with the amphibious lifestyle of these seals and their apparent reliance on sound. The hearing data reported herein are the first available for spotted seals and can inform best management practices for this vulnerable species in a changing Arctic.
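
    The critical ratios reported above follow the standard definition: the masked detection threshold for the tone minus the spectrum level of the masking noise at the tone frequency. A minimal sketch of that arithmetic follows; the numeric values in the example are purely illustrative and are not data from the study.

        def critical_ratio_db(masked_threshold_db, noise_spectrum_level_db):
            # Critical ratio (dB) = tone threshold in noise (dB)
            # minus masker spectrum level (dB per Hz), both in the same reference.
            return masked_threshold_db - noise_spectrum_level_db

        # Hypothetical example: a tone detected at 82 dB in noise with a
        # 52 dB spectrum level gives a 30 dB critical ratio, the value the
        # abstract reports at 25.6 kHz.
        print(critical_ratio_db(82.0, 52.0))  # -> 30.0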

  16. Coding principles of the canonical cortical microcircuit in the avian brain

    PubMed Central

    Calabrese, Ana; Woolley, Sarah M. N.

    2015-01-01

    Mammalian neocortex is characterized by a layered architecture and a common or “canonical” microcircuit governing information flow among layers. This microcircuit is thought to underlie the computations required for complex behavior. Despite the absence of a six-layered cortex, birds are capable of complex cognition and behavior. In addition, the avian auditory pallium is composed of adjacent information-processing regions with genetically identified neuron types and projections among regions comparable with those found in the neocortex. Here, we show that the avian auditory pallium exhibits the same information-processing principles that define the canonical cortical microcircuit, long thought to have evolved only in mammals. These results suggest that the canonical cortical microcircuit evolved in a common ancestor of mammals and birds and provide a physiological explanation for the evolution of neural processes that give rise to complex behavior in the absence of cortical lamination. PMID:25691736

  17. Development of a wireless system for auditory neuroscience.

    PubMed

    Lukes, A J; Lear, A T; Snider, R K

    2001-01-01

    In order to study how the auditory cortex extracts communication sounds in a realistic acoustic environment, a wireless system is being developed that will transmit acoustic as well as neural signals. The miniature transmitter will be capable of transmitting two acoustic signals with 37.5 kHz bandwidths (75 kHz sample rate) and 56 neural signals with bandwidths of 9.375 kHz (18.75 kHz sample rate). These signals will be time-division multiplexed into one high-bandwidth signal with a 1.2 MHz sample rate. This high-bandwidth signal will then be frequency modulated onto a 2.4 GHz carrier, which resides in the industrial, scientific, and medical (ISM) band designed for low-power, short-range wireless applications. On the receiver side, the signal will be demodulated from the 2.4 GHz carrier and then digitized by an analog-to-digital (A/D) converter. The acoustic and neural signals will be digitally demultiplexed from the multiplexed signal into their respective channels. Oversampling (20 MHz) will allow reconstruction of the multiplexing clock by a digital signal processor (DSP) that will perform frame and bit synchronization. A frame is a segment of the multiplexed signal that contains all the channels; several channels tied high and low will mark the start of each frame. This technological development will bring two benefits to auditory neuroscience. It will allow simultaneous recording of many neurons, permitting studies of population codes. It will also allow neural functions to be determined in higher auditory areas by correlating neural and acoustic signals without a priori knowledge of the necessary stimuli.
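
    The channel counts and sample rates above fix the aggregate rate (2 x 75 kHz + 56 x 18.75 kHz = 1.2 MHz). A minimal demultiplexing sketch in Python/NumPy is shown below under an assumed frame layout of 64 slots per frame at an 18.75 kHz frame rate (56 neural slots plus 2 acoustic channels x 4 slots); this layout is consistent with the stated rates but is an assumption, not necessarily the layout the authors chose.

        import numpy as np

        SLOTS_PER_FRAME = 64   # assumed: 56 neural slots + 2 acoustic channels x 4 slots
        N_NEURAL = 56          # one sample per neural channel per frame (18.75 kHz)
        N_ACOUSTIC = 2         # four samples per acoustic channel per frame (75 kHz)

        def demultiplex(stream):
            # Split a frame-aligned sample stream (1.2 MHz rate) into channels.
            frames = np.asarray(stream).reshape(-1, SLOTS_PER_FRAME)
            neural = frames[:, :N_NEURAL].T                                  # (56, n_frames)
            acoustic = frames[:, N_NEURAL:].reshape(-1, N_ACOUSTIC, 4)
            acoustic = acoustic.transpose(1, 0, 2).reshape(N_ACOUSTIC, -1)   # (2, 4*n_frames)
            return neural, acoustic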

  18. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG

    PubMed Central

    2014-01-01

    Background We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods Following the work of de Munck et al., our approach is based on the maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical to all channels, we evaluate it for each individual channel. Results Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398
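
    The central modeling step described above is the approximation of the spatio-temporal noise covariance by a Kronecker product of a spatial and a temporal covariance matrix. A simplified sketch of the usual alternating ("flip-flop") maximum-likelihood estimate of those two factors is given below; it illustrates the model class only and is not the exact estimator or per-channel variability computation used in the paper.

        import numpy as np

        def kronecker_noise_covariance(residuals, n_iter=5):
            # residuals: array (n_trials, n_channels, n_samples) of noise-only epochs.
            # Model: cov(vec(E)) is approximated by kron(C_temporal, C_spatial).
            n, c, t = residuals.shape
            C_s, C_t = np.eye(c), np.eye(t)
            for _ in range(n_iter):
                inv_t = np.linalg.inv(C_t)
                C_s = sum(E @ inv_t @ E.T for E in residuals) / (n * t)
                inv_s = np.linalg.inv(C_s)
                C_t = sum(E.T @ inv_s @ E for E in residuals) / (n * c)
            return C_s, C_t   # note: the two factors are only defined up to a common scale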

  19. Institute for the Study of Human Capabilities

    DTIC Science & Technology

    1994-05-31

    Catholic Univ.; James Reason, Univ. of Manchester, UK; Earl Hunt, Univ. of Washington; Barry Kantowitz, Battelle Res. Center; Colin Drury, SUNY at Buffalo, NY; Andrew...score variance accounted for were obtained in the elderly subjects by including various measures of auditory processing. ...frequency could be discriminated. This analysis will now be extended to account for the detailed differences in thresholds observed for female (Kewley-Port

  20. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

    Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather than the caudal-medial ICC using the AMI may improve or elicit different types of hearing capabilities.
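
    The comparison described above, paired stimulation versus the linear sum of the single-site responses, can be summarized by a simple enhancement index; values above 1 indicate supralinear (enhanced) cortical activity. The sketch below is an illustrative metric only, not necessarily the exact normalization used in the study.

        def enhancement_index(paired_response, response_a, response_b):
            # Ratio of the cortical response to co-activation of both ICC sites
            # over the linear sum of the responses to each site stimulated alone.
            return paired_response / (response_a + response_b)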

  1. Voice responses to changes in pitch of voice or tone auditory feedback

    NASA Astrophysics Data System (ADS)

    Sivasankar, Mahalakshmi; Bauer, Jay J.; Babu, Tara; Larson, Charles R.

    2005-02-01

    The present study was undertaken to examine whether a subject's voice F0 responded not only to perturbations in pitch of voice feedback but also to changes in pitch of a side tone presented congruent with voice feedback. Small-magnitude, brief-duration perturbations in pitch of voice or tone auditory feedback were randomly introduced during sustained vowel phonations. Results demonstrated a higher rate and larger magnitude of voice F0 responses to changes in pitch of the voice compared with a triangular-shaped tone (experiment 1) or a pure tone (experiment 2). However, response latencies did not differ across voice or tone conditions. Data suggest that subjects responded to the change in F0 rather than harmonic frequencies of auditory feedback because voice F0 response prevalence, magnitude, or latency did not statistically differ across triangular-shaped tone or pure-tone feedback. Results indicate the audio-vocal system is sensitive to the change in pitch of a variety of sounds, which may represent a flexible system capable of adapting to changes in the subject's voice. However, lower prevalence and smaller responses to tone pitch-shifted signals suggest that the audio-vocal system may resist changes to the pitch of other environmental sounds when voice feedback is present.

  2. Extending and Applying the EPIC Architecture for Human Cognition and Performance: Auditory and Spatial Components

    DTIC Science & Technology

    2013-03-20

    Wakefield of the University of Michigan as Co-PI. This extended activity produced a large number of products and accomplishments; however, this report...speech communication will be expanded to provide a robust modeling and prediction capability for tasks involving speech production and speech and non...preparations made to move to the newer Cocoa API instead of the previous Carbon API. In the following sections, an extended treatment will be

  3. Skylab

    NASA Image and Video Library

    1972-01-01

    This chart details Skylab's Metabolic Activity experiment (M171), a medical evaluation facility designed to measure astronauts' metabolic changes while on long-term space missions. The experiment obtained information on astronauts' physiological capabilities and limitations and provided data useful in the design of future spacecraft and work programs. Physiological responses to physical activity were deduced by analyzing inhaled and exhaled air, pulse rate, blood pressure, and other selected variables of the crew while they performed controlled amounts of physical work with a bicycle ergometer. The Marshall Space Flight Center had program responsibility for the development of Skylab hardware and experiments.

  4. Skylab

    NASA Image and Video Library

    1973-01-01

    This Skylab-3 onboard photograph shows Astronaut Allen Bean on the ergometer, breathing into the metabolic analyzer. Skylab's Metabolic Activity experiment (M171), a medical evaluation facility, was designed to measure astronauts' metabolic changes while on long-term space missions. The experiment obtained information on astronauts' physiological capabilities and limitations and provided data useful in the design of future spacecraft and work programs. Physiological responses to physical activity were deduced by analyzing inhaled and exhaled air, pulse rate, blood pressure, and other selected variables of the crew while they performed controlled amounts of physical work with a bicycle ergometer.

  5. Modeling of the EAST ICRF antenna with ICANT Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin Chengming; Zhao Yanping; Colas, L.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near-fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  6. Modeling of the EAST ICRF antenna with ICANT Code

    NASA Astrophysics Data System (ADS)

    Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.

    2007-09-01

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near-fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  7. Adaptation in the auditory midbrain of the barn owl (Tyto alba) induced by tonal double stimulation.

    PubMed

    Singheiser, Martin; Ferger, Roland; von Campenhausen, Mark; Wagner, Hermann

    2012-02-01

    During hunting, the barn owl typically listens to several successive sounds as generated, for example, by rustling mice. As auditory cells exhibit adaptive coding, the earlier stimuli may influence the detection of the later stimuli. This situation was mimicked with two double-stimulus paradigms, and adaptation was investigated in neurons of the barn owl's central nucleus of the inferior colliculus. Each double-stimulus paradigm consisted of a first or reference stimulus and a second stimulus (probe). In one paradigm (second level tuning), the probe level was varied, whereas in the other paradigm (inter-stimulus interval tuning), the stimulus interval between the first and second stimulus was changed systematically. Neurons were stimulated with monaural pure tones at the best frequency, while the response was recorded extracellularly. The responses to the probe were significantly reduced when the reference stimulus and probe had the same level and the inter-stimulus interval was short. This indicated response adaptation, which could be compensated for by an increase of the probe level of 5-7 dB over the reference level, if the latter was in the lower half of the dynamic range of a neuron's rate-level function. Recovery from adaptation could be best fitted with a double exponential showing a fast (1.25 ms) and a slow (800 ms) component. These results suggest that neurons in the auditory system show dynamic coding properties to tonal double stimulation that might be relevant for faithful upstream signal propagation. Furthermore, the overall stimulus level of the masker also seems to affect the recovery capabilities of auditory neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  8. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  9. Institute for the Study of Human Capabilities: Summary Descriptions of Research for the Period June 1, 1990 through May 31, 1991

    DTIC Science & Technology

    1991-07-23

    and H. Schroeder (eds.), Proceedings of the International Fechner Symposium. Amsterdam: North Holland. VanZandt, T. and Townsend, J.R. (Submitted...Smith, L. B. (Forthcoming). A connectionist model of the development of the notion of sameness. Thirteenth Annual Conference of the Cognitive Science...1987). A detection theory method for the analysis of visual and auditory displays. Proceedings of the 31st Annual Meeting of the Human Factors

  10. Relational Associative Learning Induces Cross-Modal Plasticity in Early Visual Cortex

    PubMed Central

    Headley, Drew B.; Weinberger, Norman M.

    2015-01-01

    Neurobiological theories of memory posit that the neocortex is a storage site of declarative memories, a hallmark of which is the association of two arbitrary neutral stimuli. Early sensory cortices, once assumed uninvolved in memory storage, recently have been implicated in associations between neutral stimuli and reward or punishment. We asked whether links between neutral stimuli also could be formed in early visual or auditory cortices. Rats were presented with a tone paired with a light using a sensory preconditioning paradigm that enabled later evaluation of successful association. Subjects that acquired this association developed enhanced sound evoked potentials in their primary and secondary visual cortices. Laminar recordings localized this potential to cortical Layers 5 and 6. A similar pattern of activation was elicited by microstimulation of primary auditory cortex in the same subjects, consistent with a cortico-cortical substrate of association. Thus, early sensory cortex has the capability to form neutral stimulus associations. This plasticity may constitute a declarative memory trace between sensory cortices. PMID:24275832

  11. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    NASA Technical Reports Server (NTRS)

    Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
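
    One of the simplest algorithms in the class evaluated by such simulations is a brightness-centroid estimate of the resolved Earth image; the sketch below is a hedged illustration of that approach, not the specific algorithm assessed in the report.

        import numpy as np

        def intensity_centroid(image):
            # Pointing reference taken as the intensity-weighted centroid of a
            # resolved (and possibly blurred) Earth image, given as a 2-D array.
            img = np.asarray(image, dtype=float)
            rows, cols = np.indices(img.shape)
            total = img.sum()
            return (rows * img).sum() / total, (cols * img).sum() / total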

  12. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    PubMed

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning associations of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance for the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes, taking non-congruent pairs as associated, and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of the additional mistakes and the lower reported confidence for non-congruent pairs in the passive learning mode.
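
    A toy version of the model class named above (a reinforcement-style learning rule combined with exponential memory decay between trials) is sketched below; parameter names and values are illustrative and are not taken from the paper.

        def update_association(strength, outcome, learning_rate=0.2, decay=0.05):
            # One trial: forgetting first (memory-decay module), then an update
            # toward the observed outcome (1 = pair associated, 0 = not).
            strength *= (1.0 - decay)
            strength += learning_rate * (outcome - strength)
            return strength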

  13. Direct Intracochlear Acoustic Stimulation Using a PZT Microactuator.

    PubMed

    Luo, Chuan; Omelchenko, Irina; Manson, Robert; Robbins, Carol; Oesterle, Elizabeth C; Cao, Guo Zhong; Shen, I Y; Hume, Clifford R

    2015-12-01

    Combined electric and acoustic stimulation has proven to be an effective strategy to improve hearing in some cochlear implant users. We describe an acoustic microactuator to directly deliver stimuli to the perilymph in the scala tympani. The 800 µm by 800 µm actuator has a silicon diaphragm driven by a piezoelectric thin film (e.g., lead-zirconium-titanium oxide or PZT). This device could also be used as a component of a bimodal acoustic-electric electrode array. In the current study, we established a guinea pig model to test the actuator for its ability to deliver auditory signals to the cochlea in vivo. The actuator was placed through the round window of the cochlea. Auditory brainstem response (ABR) thresholds, peak latencies, and amplitude growth were calculated for an ear canal speaker versus the intracochlear actuator for tone burst stimuli at 4, 8, 16, and 24 kHz. An ABR was obtained after removal of the probe to assess loss of hearing related to the procedure. In some animals, the temporal bone was harvested for histologic analysis of cochlear damage. We show that the device is capable of stimulating ABRs in vivo with latencies and growth functions comparable to stimulation in the ear canal. Further experiments will be necessary to evaluate the efficiency and safety of this modality in long-term auditory stimulation and its ability to be integrated with conventional cochlear implant arrays. © The Author(s) 2015.

  14. Feasibility of and Design Parameters for a Computer-Based Attitudinal Research Information System

    DTIC Science & Technology

    1975-08-01

    Auditory Displays Auditory Evoked Potentials Auditory Feedback Auditory Hallucinations Auditory Localization Auditory Masking Auditory Neurons...surprising to hear these problems expressed once again and in the same old refrain. The Navy attitude surveyors were frustrated when they...Audiology Audiometers Audiometry Audiotapes Audiovisual Communications Media Audiovisual Instruction Auditory Cortex Auditory

  15. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  16. Spectral-Temporal Modulated Ripple Discrimination by Children With Cochlear Implants.

    PubMed

    Landsberger, David M; Padilla, Monica; Martinez, Amy S; Eisenberg, Laurie S

    A postlingually implanted adult typically develops hearing with an intact auditory system, followed by periods of deafness (or near deafness) and adaptation to the implant. For an early implanted child whose brain is highly plastic, the auditory system matures with consistent input from a cochlear implant. It is likely that the auditory system of early implanted cochlear implant users is fundamentally different than postlingually implanted adults. The purpose of this study is to compare the basic psychophysical capabilities and limitations of these two populations on a spectral resolution task to determine potential effects of early deprivation and plasticity. Performance on a spectral resolution task (Spectral-temporally Modulated Ripple Test [SMRT]) was measured for 20 bilaterally implanted, prelingually deafened children (between 5 and 13 years of age) and 20 hearing children within the same age range. Additionally, 15 bilaterally implanted, postlingually deafened adults, and 10 hearing adults were tested on the same task. Cochlear implant users (adults and children) were tested bilaterally, and with each ear alone. Hearing listeners (adults and children) were tested with the unprocessed SMRT and with a vocoded version that simulates an 8-channel cochlear implant. For children with normal hearing, a positive correlation was found between age and SMRT score for both the unprocessed and vocoded versions. Older hearing children performed similarly to hearing adults in both the unprocessed and vocoded test conditions. However, for children with cochlear implants, no significant relationship was found between SMRT score and chronological age, age at implantation, or years of implant experience. Performance by children with cochlear implants was poorer than performance by cochlear implanted adults. It was also found that children implanted sequentially tended to have better scores with the first implant compared with the second implant. This difference was not observed for adults. An additional finding was that SMRT score was negatively correlated with age for adults with implants. Results from this study suggest that basic psychophysical capabilities of early implanted children and postlingually implanted adults differ when assessed in the sound field using their personal implant processors. Because spectral resolution does not improve with age for early implanted children, it seems likely that the sparse representation of the signal provided by a cochlear implant limits spectral resolution development. These results are supported by the finding that postlingually implanted adults, whose auditory systems matured before the onset of hearing loss, perform significantly better than early implanted children on the spectral resolution test.

  17. An interactive driving simulation for driver control and decision-making research

    NASA Technical Reports Server (NTRS)

    Allen, R. W.; Hogge, J. R.; Schwartz, S. H.

    1975-01-01

    Display techniques and equations of motion for a relatively simple fixed base car simulation are described. The vehicle dynamics include simplified lateral (steering) and longitudinal (speed) degrees of freedom. Several simulator tasks are described which require a combination of operator control and decision making, including response to wind gust inputs, curved roads, traffic signal lights, and obstacles. Logic circuits are used to detect speeding, running red lights, and crashes. A variety of visual and auditory cues are used to give the driver appropriate performance feedback. The simulated equations of motion are reviewed and the technique for generating the line drawing CRT roadway display is discussed. On-line measurement capabilities and experimenter control features are presented, along with previous and current research results demonstrating simulation capabilities and applications.

  18. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    PubMed

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
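
    The adaptation measure described for Experiment I (formant adjustments across trials under shifted feedback) is commonly quantified as each trial's deviation of the produced formant from a pre-shift baseline, signed so that opposing the imposed shift counts as positive adaptation. The sketch below illustrates that convention; the function name, baseline window, and sign convention are assumptions, not details from the study.

        import numpy as np

        def adaptation_extent(formant_hz, n_baseline=10, shift_sign=+1):
            # shift_sign: +1 if the feedback shift raised the formant, -1 if it lowered it.
            f = np.asarray(formant_hz, dtype=float)
            baseline = f[:n_baseline].mean()
            return -shift_sign * (f - baseline)   # positive = production opposes the shift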

  19. Hearing in cetaceans: from natural history to experimental biology.

    PubMed

    Mooney, T Aran; Yamato, Maya; Branstetter, Brian K

    2012-01-01

    Sound is a primary sensory cue for most marine mammals, and this is especially true for cetaceans. To passively and actively acquire information about their environment, cetaceans have some of the most derived ears of all mammals, capable of sophisticated, sensitive hearing and auditory processing. These capabilities have developed for survival in an underwater world where sound travels five times faster than in air, and where light is quickly attenuated and often limited at depth, at night, and in murky waters. Cetacean auditory evolution has capitalized on the ubiquity of sound cues and the efficiency of underwater acoustic communication. The sense of hearing is central to cetacean sensory ecology, enabling vital behaviours such as locating prey, detecting predators, identifying conspecifics, and navigating. Increasing levels of anthropogenic ocean noise appear to influence many of these activities. Here, we describe the historical progress of investigations on cetacean hearing, with a particular focus on odontocetes and recent advancements. While this broad topic has been studied for several centuries, new technologies in the past two decades have been leveraged to improve our understanding of a wide range of taxa, including some of the most elusive species. This chapter addresses topics including how sounds are received, what sounds are detected, hearing mechanisms for complex acoustic scenes, recent anatomical and physiological studies, the potential impacts of noise, and mysticete hearing. We conclude by identifying emerging research topics and areas that require greater focus. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Metabolic Activity - Skylab Experiment M171

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This chart details Skylab's Metabolic Activity experiment (M171), a medical evaluation facility designed to measure astronauts' metabolic changes while on long-term space missions. The experiment obtained information on astronauts' physiological capabilities and limitations and provided data useful in the design of future spacecraft and work programs. Physiological responses to physical activity were deduced by analyzing inhaled and exhaled air, pulse rate, blood pressure, and other selected variables of the crew while they performed controlled amounts of physical work with a bicycle ergometer. The Marshall Space Flight Center had program responsibility for the development of Skylab hardware and experiments.

  1. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
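
    A minimal recurrent decoder of the kind described, mapping a sequence of repeated stabilizer (syndrome) measurements to per-qubit correction probabilities, is sketched below in Python with PyTorch; the layer sizes and output encoding are assumptions, not the architecture reported by the authors.

        import torch
        import torch.nn as nn

        class SyndromeDecoder(nn.Module):
            # Repeated stabilizer measurements in, per-qubit correction probabilities out.
            def __init__(self, n_stabilizers, n_qubits, hidden=128):
                super().__init__()
                self.rnn = nn.GRU(n_stabilizers, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_qubits)

            def forward(self, syndromes):                 # (batch, rounds, n_stabilizers)
                _, h = self.rnn(syndromes)                # final hidden state summarizes the history
                return torch.sigmoid(self.head(h[-1]))    # (batch, n_qubits)

    Such a decoder would typically be trained with a binary cross-entropy loss against corrections known from simulated error histories.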

  2. Skylab-3 Mission Onboard Photograph - Astronaut Bean on Ergometer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    This Skylab-3 onboard photograph shows Astronaut Allen Bean on the ergometer, breathing into the metabolic analyzer. Skylab's Metabolic Activity experiment (M171), a medical evaluation facility, was designed to measure astronauts' metabolic changes while on long-term space missions. The experiment obtained information on astronauts' physiological capabilities and limitations and provided data useful in the design of future spacecraft and work programs. Physiological responses to physical activity were deduced by analyzing inhaled and exhaled air, pulse rate, blood pressure, and other selected variables of the crew while they performed controlled amounts of physical work with a bicycle ergometer.

  3. Procedures for central auditory processing screening in schoolchildren.

    PubMed

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English; exclusion criteria were studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normed by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easily accessible and low-cost alternative for the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed that allow the selection of as many hearing skills as possible, validated by comparison with the batteries of tests used in diagnosis. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  4. Outline for Remediation of Problem Areas for Children with Learning Disabilities. Revised. = Bosquejo para la Correccion de Areas Problematicas para Ninos con Impedimientos del Aprendizaje.

    ERIC Educational Resources Information Center

    Bornstein, Joan L.

    The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…

  5. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  6. Acquired hearing loss and brain plasticity.

    PubMed

    Eggermont, Jos J

    2017-01-01

    Acquired hearing loss results in an imbalance of the cochlear output across frequency. Central auditory system homeostatic processes responding to this produce frequency-specific gain changes consequent to the emerging imbalance between excitation and inhibition. Several consequences thereof are increased spontaneous firing rates, increased neural synchrony, and (in adults) a reorganization of tonotopic areas that is potentially restricted to the auditory thalamus and cortex. It does not seem to matter much whether the hearing loss is acquired neonatally or in adulthood. In humans, no clear evidence of tonotopic map changes with hearing loss has so far been provided, but frequency-specific gain changes are well documented. Unilateral hearing loss, in addition, makes brain activity across hemispheres more symmetrical and more synchronous. Molecular studies indicate that in the brainstem, after 2-5 days post trauma, glutamatergic activity is reduced, whereas glycinergic and GABAergic activity is largely unchanged. At 2 months post trauma, excitatory activity remains decreased but inhibitory activity is significantly increased. In contrast, protein assays related to inhibitory transmission are all decreased or unchanged in the brainstem, midbrain, and auditory cortex. Comparison of neurophysiological data with the molecular findings along a timeline of changes following noise trauma suggests that increases in spontaneous firing rates are related to decreases in inhibition, and not to increases in excitation. Because noise-induced hearing loss in cats resulted in a loss of cortical temporal-processing capabilities, this may also underlie deficits in speech understanding in humans. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Direct Intracochlear Acoustic Stimulation Using a PZT Microactuator

    PubMed Central

    Luo, Chuan; Omelchenko, Irina; Manson, Robert; Robbins, Carol; Oesterle, Elizabeth C.; Cao, Guo Zhong; Hume, Clifford R.

    2015-01-01

    Combined electric and acoustic stimulation has proven to be an effective strategy to improve hearing in some cochlear implant users. We describe an acoustic microactuator to directly deliver stimuli to the perilymph in the scala tympani. The 800 µm by 800 µm actuator has a silicon diaphragm driven by a piezoelectric thin film (e.g., lead-zirconium-titanium oxide or PZT). This device could also be used as a component of a bimodal acoustic-electric electrode array. In the current study, we established a guinea pig model to test the actuator for its ability to deliver auditory signals to the cochlea in vivo. The actuator was placed through the round window of the cochlea. Auditory brainstem response (ABR) thresholds, peak latencies, and amplitude growth were calculated for an ear canal speaker versus the intracochlear actuator for tone burst stimuli at 4, 8, 16, and 24 kHz. An ABR was obtained after removal of the probe to assess loss of hearing related to the procedure. In some animals, the temporal bone was harvested for histologic analysis of cochlear damage. We show that the device is capable of stimulating ABRs in vivo with latencies and growth functions comparable to stimulation in the ear canal. Further experiments will be necessary to evaluate the efficiency and safety of this modality in long-term auditory stimulation and its ability to be integrated with conventional cochlear implant arrays. PMID:26631107

  8. Auditory midbrain implant: a review.

    PubMed

    Lim, Hubert H; Lenarz, Minoo; Lenarz, Thomas

    2009-09-01

    The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single-shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open-set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open-set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception, along with several feasible solutions for improving AMI implementation, are presented.

  9. Light-weight analyzer for odor recognition

    DOEpatents

    Vass, Arpad A; Wise, Marcus B

    2014-05-20

    The invention provides a lightweight analyzer, or detector, capable of locating clandestine graves. The detector utilizes the very specific and unique chemicals identified in the database of human decompositional odor. This detector, based on specific chemical compounds found relevant to human decomposition, is the next step forward in clandestine grave detection and will take the guesswork out of current methods using canines and ground-penetrating radar, which have historically been unreliable. The detector is self-contained, portable, and built for field use. Both visual and auditory cues are provided to the operator.

  10. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  11. Auditory and motor imagery modulate learning in music performance

    PubMed Central

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences. PMID:23847495

  12. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  13. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    PubMed

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. The procedures were fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups: fluency improved only in the individuals without auditory processing disorders.

  14. State of the art in perceptual design of hearing aids

    NASA Astrophysics Data System (ADS)

    Edwards, Brent W.; van Tasell, Dianne J.

    2002-05-01

    Hearing aid capabilities have increased dramatically over the past six years, in large part due to the development of small, low-power digital signal processing chips suitable for hearing aid applications. As hearing aid signal processing capabilities increase, there will be new opportunities to apply perceptually based knowledge to technological development. Most hearing loss compensation techniques in today's hearing aids are based on simple estimates of audibility and loudness. As our understanding of the psychoacoustical and physiological characteristics of sensorineural hearing loss improves, the result should be improved design of hearing aids and fitting methods. The state of the art in hearing aids will be reviewed, including form factors, user requirements, and technology that improves speech intelligibility, sound quality, and functionality. General areas of auditory perception that remain unaddressed by current hearing aid technology will be discussed.

  15. Ontogenetic Development of Weberian Ossicles and Hearing Abilities in the African Bullhead Catfish

    PubMed Central

    Lechner, Walter; Heiss, Egon; Schwaha, Thomas; Glösmann, Martin; Ladich, Friedrich

    2011-01-01

    Background The Weberian apparatus of otophysine fishes facilitates sound transmission from the swimbladder to the inner ear to increase hearing sensitivity. It has been of great interest to biologists since the 19th century. No studies, however, are available on the development of the Weberian ossicles and their effect on the development of hearing in catfishes. Methodology/Principal Findings We investigated the development of the Weberian apparatus and auditory sensitivity in the catfish Lophiobagrus cyclurus. Specimens from 11.3 mm to 85.5 mm in standard length were studied. Morphology was assessed using sectioning, histology, and X-ray computed tomography, along with 3D reconstruction. Hearing thresholds were measured utilizing the auditory evoked potentials recording technique. Weberian ossicles and interossicular ligaments were fully developed in all stages investigated except in the smallest size group. In the smallest catfish, the intercalarium and the interossicular ligaments were still missing and the tripus was not yet fully developed. The smallest juveniles showed the lowest auditory sensitivity and were unable to detect frequencies higher than 2 or 3 kHz; sensitivity increased in larger specimens by up to 40 dB, and frequency detection extended up to 6 kHz. In the size groups capable of perceiving frequencies up to 6 kHz, larger individuals had better hearing abilities at low frequencies (0.05–2 kHz), whereas smaller individuals showed better hearing at the highest frequencies (4–6 kHz). Conclusions/Significance Our data indicate that the ability of otophysine fish to detect sounds at low levels and high frequencies largely depends on the development of the Weberian apparatus. A significant increase in auditory sensitivity was observed as soon as all Weberian ossicles and interossicular ligaments were present and the chain for transmitting sounds from the swimbladder to the inner ear was complete. This contrasts with findings in another otophysine, the zebrafish, where no threshold changes have been observed. PMID:21533262

  16. Noise exposure of immature rats can induce different age-dependent extra-auditory alterations that can be partially restored by rearing animals in an enriched environment.

    PubMed

    Molina, S J; Capani, F; Guelman, L R

    2016-04-01

    It has been previously shown that different extra-auditory alterations can be induced in animals exposed to noise at 15 days of age. However, data regarding exposure of younger animals, which do not yet have a functional auditory system, have not been obtained. Moreover, the possibility of finding a helpful strategy to restore these changes has not been explored so far. Therefore, the aims of the present work were to test age-related differences in diverse hippocampal-dependent behavioral measurements that might be affected in noise-exposed rats, as well as to evaluate the effectiveness of a potential neuroprotective strategy, the enriched environment (EE), on noise-induced behavioral alterations. Male Wistar rats of 7 and 15 days were exposed to moderate levels of noise for two hours. At weaning, animals were separated and reared either in standard or in EE cages for one week. At 28 days of age, different hippocampal-dependent behavioral assessments were performed. Results show that rats exposed to noise at 7 and 15 days were differentially affected. Moreover, EE was effective in restoring all altered variables when animals were exposed at 7 days, whereas only a few were restored in rats exposed at 15 days. The present findings suggest that noise exposure was capable of triggering significant hippocampal-related behavioral alterations that differed depending on the age of exposure. In addition, it could be proposed that hearing structures did not seem to be necessarily involved in the generation of noise-induced hippocampal-related behaviors, as these were observed even in animals with an immature auditory pathway. Finally, it could be hypothesized that the differential restoration achieved by EE rearing might also depend on the degree of maturation at the time of exposure and the variable evaluated, with younger animals being more susceptible to environmental manipulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds

    PubMed Central

    Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.

    2012-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was likewise not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, the differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625

  18. DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS

    PubMed Central

    Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.

    2014-01-01

    We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757

  19. Multimedia-assisted breathwalk-aware system.

    PubMed

    Yu, Meng-Chieh; Wu, Huan; Lee, Ming-Sui; Hung, Yi-Ping

    2012-12-01

    Breathwalk is a practice that combines specific patterns of footsteps synchronized with breathing. In this study, we developed a multimedia-assisted Breathwalk-aware system which detects the user's walking and breathing conditions and provides appropriate multimedia guidance on a smartphone. Through the mobile device, the system enhances the user's awareness of walking and breathing behaviors. As an example application in slow technology, the system could help meditation beginners learn "walking meditation," a type of meditation in which the practitioner aims to take each step as slowly as possible, to synchronize footsteps with breathing, and to land every footstep toes first. In the pilot study, we developed a walking-aware system and evaluated whether the multimedia-assisted mechanism is capable of enhancing a beginner's walking awareness during walking meditation. Experimental results show that it could effectively assist beginners in slowing down their walking speed and decreasing incorrect footsteps. In the second experiment, we evaluated the Breathwalk-aware system to find a better feedback mechanism for learning the techniques of Breathwalk during walking meditation. The experimental results show that the visual-auditory mechanism is a better multimedia-assisted mechanism for walking meditation than the visual-only or auditory-only mechanisms.

  20. Post-processing of auditory steady-state responses to correct spectral leakage.

    PubMed

    Felix, Leonardo Bonato; de Sá, Antonio Mauricio Ferreira Leite Miranda; Mendes, Eduardo Mazoni Andrade Marçal; Moraes, Márcio Flávio Dutra

    2009-06-30

    Auditory steady-state responses (ASSRs) are electrical manifestations of brain activity evoked by high-rate sound stimulation. These evoked responses can be used to assess the hearing capabilities of a subject in an objective, automatic fashion. Usually, the detection protocol is accomplished with frequency-domain techniques, such as magnitude-squared coherence, whose estimation is based on the fast Fourier transform (FFT) of several data segments. In practice, the FFT-based spectrum may spread the energy of a given frequency into its side bins; this escape of energy in the spectrum is called spectral leakage. The distortion of the spectrum due to leakage may severely compromise the statistical significance of objective detection. This work presents an offline, a posteriori method for spectral leakage minimization in the frequency-domain analysis of ASSRs using the coherent sampling criterion and interpolation in time. The technique was applied to the local field potentials of 10 Wistar rats and the results, together with those from simulated data, indicate that a leakage-free analysis of ASSRs is possible for any dataset if the methods described in this paper are followed.
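
    The correction described above rests on the coherent-sampling criterion: leakage vanishes when the analysis window contains an integer number of stimulus-frequency cycles, so that the response falls exactly on an FFT bin. The sketch below is a simplified illustration, not the authors' code; the 40 Hz modulation rate, the sampling rate, and the window length are assumed values. It resamples a recorded segment by time-domain interpolation so that the window holds a whole number of cycles before computing the spectrum.

```python
import numpy as np

fs = 1000.0          # assumed original sampling rate (Hz)
f_stim = 40.0        # assumed ASSR stimulation rate (Hz)
n = 1024             # analysis window length in samples

# Synthetic "recording": a 40 Hz steady-state response buried in noise.
t = np.arange(n) / fs
x = 0.2 * np.sin(2 * np.pi * f_stim * t) + np.random.randn(n)

# Coherent sampling: choose a window duration that spans an integer
# number of stimulus cycles, then interpolate the data onto it.
n_cycles = np.floor(f_stim * n / fs)           # whole cycles that fit in the window
coherent_duration = n_cycles / f_stim          # seconds spanning exactly those cycles
t_new = np.linspace(0.0, coherent_duration, n, endpoint=False)
x_coherent = np.interp(t_new, t, x)            # time-domain interpolation

# After interpolation the stimulation frequency lands on an exact FFT bin,
# so its energy no longer leaks into neighbouring bins.
spectrum = np.abs(np.fft.rfft(x_coherent)) / n
freqs = np.fft.rfftfreq(n, d=coherent_duration / n)
bin_at_stim = np.argmin(np.abs(freqs - f_stim))
print(f"ASSR bin: {freqs[bin_at_stim]:.2f} Hz, amplitude {spectrum[bin_at_stim]:.3f}")
```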

  1. The effect of 1/f fluctuation in inter-stimulus intervals on auditory evoked mismatch field.

    PubMed

    Harada, Nobuyoshi; Masuda, Tadashi; Endo, Hiroshi; Nakamura, Yukihiro; Takeda, Tsunehiro; Tonoike, Mitsuo

    2005-05-13

    This study focused on the effect of the regularity of environmental stimuli on the human brain's ability to extract informational order. The regularity of environmental stimuli can be described by the exponent n of the fluctuation 1/f^n. We studied the effect of the exponent of the fluctuation in the inter-stimulus interval (ISI) on the elicitation of auditory evoked mismatch fields (MMF) with two sounds of alternating frequency. ISI times were given by three types of fluctuation, 1/f^0, 1/f^1 and 1/f^2, and by a fixed interval (1/f^infinity). The root mean square (RMS) value of the MMF increased significantly (F(3,9) = 4.95, p = 0.027) with increases in the exponent of the fluctuation. Increments in the regularity of the fluctuation provoked enhancement of the MMF, which reflects the production of a memory trace based on the anticipation of the stimulus timing. The gradient of the curve, indicating the ratio of increments between the MMF and the exponent of the fluctuation, can express a subject's capability to extract regularity from fluctuating stimuli.
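
    One way to realize inter-stimulus intervals whose fluctuation follows a 1/f^n power spectrum is to shape white noise in the frequency domain and then scale the result around the desired mean interval. The sketch below is an illustrative construction, not the authors' stimulus-generation code; the 800 ms mean ISI and the ±200 ms fluctuation range are assumed values.

```python
import numpy as np

def one_over_f_intervals(n_intervals: int, exponent: float,
                         mean_isi: float = 0.8, spread: float = 0.2,
                         seed: int = 0) -> np.ndarray:
    """Generate ISIs (seconds) whose fluctuation has a 1/f^exponent power spectrum."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_intervals)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_intervals)
    # Scale each Fourier amplitude by 1/f^(exponent/2) so power goes as 1/f^exponent.
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-exponent / 2.0)
    fluctuation = np.fft.irfft(spectrum * scale, n=n_intervals)
    # Normalize the fluctuation and map it onto mean_isi +/- spread.
    fluctuation = fluctuation / np.max(np.abs(fluctuation))
    return mean_isi + spread * fluctuation

# Example: 100 intervals for each of the three fluctuation conditions.
for n_exp in (0.0, 1.0, 2.0):
    isis = one_over_f_intervals(100, n_exp)
    print(f"1/f^{n_exp:.0f}: mean ISI = {isis.mean():.3f} s")
```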

  2. Design evaluation in technology assessment: Illustrated by auditory impedance and sound distribution problems

    NASA Technical Reports Server (NTRS)

    Halldane, J. F.

    1972-01-01

    Technology is considered as a culture for changing a physical world and technology assessment questions the inherent cultural capability to modify power and material in support of living organisms. A comprehensive goal-parameter-synthesis-criterion specification is presented as a basis for a rational assessment of technology. The thesis queries the purpose of the assessed problems, the factors considered, the relationships between factors, and the values assigned those factors to accomplish the appropriate purpose. Stationary and sequential evaluation of enviro-organismic systems are delegated to the responsible personalities involved in design; from promoter/designer through contractor to occupant. Discussion includes design goals derived from organismic factors, definitions of human responses which establish viable criteria and relevant correlation models, linking stimulus parameters, and parallel problem-discipline centered design organization. A consistent concept of impedance, as a degradation in the performance of a specified parameter, is introduced to overcome the arbitrary inoperative connotations of terms like noise, discomfort, and glare. Applications of the evaluative specification are illustrated through design problems related to auditory impedance and sound distribution.

  3. INTERPOL survey of the use of speaker identification by law enforcement agencies.

    PubMed

    Morrison, Geoffrey Stewart; Sahito, Farhan Hyder; Jardine, Gaëlle; Djokic, Djordje; Clavet, Sophie; Berghs, Sabine; Goemans Dorny, Caroline

    2016-06-01

    A survey was conducted of the use of speaker identification by law enforcement agencies around the world. A questionnaire was circulated to law enforcement agencies in the 190 member countries of INTERPOL. 91 responses were received from 69 countries. 44 respondents reported that they had speaker identification capabilities in house or via external laboratories. Half of these came from Europe. 28 respondents reported that they had databases of audio recordings of speakers. The clearest pattern in the responses was that of diversity. A variety of different approaches to speaker identification were used: The human-supervised-automatic approach was the most popular in North America, the auditory-acoustic-phonetic approach was the most popular in Europe, and the spectrographic/auditory-spectrographic approach was the most popular in Africa, Asia, the Middle East, and South and Central America. Globally, and in Europe, the most popular framework for reporting conclusions was identification/exclusion/inconclusive. In Europe, the second most popular framework was the use of verbal likelihood ratio scales. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Impairments of Multisensory Integration and Cross-Sensory Learning as Pathways to Dyslexia

    PubMed Central

    Hahn, Noemi; Foxe, John J.; Molholm, Sophie

    2014-01-01

    Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms. PMID:25265514

  5. Auditory priming improves neural synchronization in auditory-motor entrainment.

    PubMed

    Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J

    2018-05-22

    Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition in each group differed from those in the respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power in a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency that facilitates the motor system during the process of entrainment. These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
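
    The distinction drawn above between evoked power (phase-locked across trials) and total power (including non-phase-locked activity) can be made concrete with a Morlet-wavelet time-frequency decomposition. The following sketch is a generic illustration under assumed parameters (an epoched single-channel array such as electrode C3, a 500 Hz sampling rate, 2 s epochs); it is not the analysis pipeline used in the study.

```python
import numpy as np

def morlet(freq: float, fs: float, n_cycles: float = 5.0) -> np.ndarray:
    """Complex Morlet wavelet at `freq` Hz."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def tf_power(epochs: np.ndarray, fs: float, freqs: np.ndarray):
    """Return (total_power, evoked_power), each shaped (n_freqs, n_times).

    epochs: array of shape (n_trials, n_times) from a single channel.
    Total power averages single-trial power; evoked power is the power of the
    trial-averaged (phase-locked) signal.
    """
    erp = epochs.mean(axis=0)
    total, evoked = [], []
    for f in freqs:
        w = morlet(f, fs)
        trial_analytic = np.array([np.convolve(tr, w, mode="same") for tr in epochs])
        total.append(np.mean(np.abs(trial_analytic) ** 2, axis=0))
        evoked.append(np.abs(np.convolve(erp, w, mode="same")) ** 2)
    return np.array(total), np.array(evoked)

# Example with synthetic data: 40 trials of 2 s epochs at an assumed 500 Hz rate.
fs = 500.0
rng = np.random.default_rng(1)
epochs = rng.standard_normal((40, int(2 * fs)))
freqs = np.arange(4, 21)                       # 4-20 Hz region of interest
total_power, evoked_power = tf_power(epochs, fs, freqs)
```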

  6. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    NASA Astrophysics Data System (ADS)

    Herzke, Tobias; Hohmann, Volker

    2005-12-01

    The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase in intelligibility resulting from a gain provided by instantaneous compression is as high as from a gain provided by linear amplification. No negative effects of the distortions introduced by the instantaneous compression scheme in terms of speech recognition are observed.
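
    A stripped-down version of the processing described above can be sketched as: split the signal into auditory-like frequency bands, compress the instantaneous envelope in each band, and re-sum. The sketch below uses Butterworth band-pass filters and a fixed power-law gain as stand-ins for the gammatone filterbank and the loudness-scaling-derived gain curves of the study, so it illustrates the structure rather than reproduces the algorithm; the band edges, compression exponent, and 16 kHz sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def instantaneous_multiband_compression(x, fs, band_edges, exponent=0.6, ref=0.1):
    """Compress the instantaneous envelope in each band and re-sum.

    band_edges: list of (low_hz, high_hz) tuples defining the analysis bands.
    exponent < 1 compresses level differences; `ref` is the envelope level
    that is left unchanged (gain = 1 there).
    """
    out = np.zeros_like(x)
    for low, high in band_edges:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envelope = np.abs(hilbert(band)) + 1e-12
        # Instantaneous gain: envelope^exponent relative to the reference level.
        gain = (envelope / ref) ** (exponent - 1.0)
        out += band * gain
    return out

# Example: compress a 1 s two-tone test signal in four octave-like bands.
fs = 16_000
t = np.arange(fs) / fs
x = 0.05 * np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
bands = [(100, 500), (500, 1000), (1000, 2000), (2000, 4000)]
y = instantaneous_multiband_compression(x, fs, bands)
```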

  7. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  8. The what, where and how of auditory-object perception.

    PubMed

    Bizley, Jennifer K; Cohen, Yale E

    2013-10-01

    The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.

  9. The what, where and how of auditory-object perception

    PubMed Central

    Bizley, Jennifer K.; Cohen, Yale E.

    2014-01-01

    The fundamental perceptual unit in hearing is the ‘auditory object’. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood. PMID:24052177

  10. Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.

    PubMed

    Meredith, M Alex; Allman, Brian L

    2015-03-01

    The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Differential responses of primary auditory cortex in autistic spectrum disorder with auditory hypersensitivity.

    PubMed

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako

    2012-01-25

    The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys with autistic spectrum disorder (nine with and nine without auditory hypersensitivity) and 12 age-matched controls. The group with hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in that region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

  12. Selective impairment of auditory selective attention under concurrent cognitive load.

    PubMed

    Dittrich, Kerstin; Stahl, Christoph

    2012-06-01

    Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.

  13. Auditory hallucinations induced by trazodone

    PubMed Central

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  14. Transient human auditory cortex activation during volitional attention shifting

    PubMed Central

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110

  15. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  16. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, in which an auditory /aba/ and a visual /aga/ are merged into the percept of ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when it was correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in the perception of subsequent auditory input. PMID:27611960

  17. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  18. Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls.

    PubMed

    Mouterde, Solveig C; Elie, Julie E; Mathevon, Nicolas; Theunissen, Frédéric E

    2017-03-29

    One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio. Copyright © 2017 Mouterde et al.

  19. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    PubMed

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether four different memory skills (memory score, sequencing score, memory span, and sequencing span) differ, and how they are related, when processed through the auditory modality, the visual modality, and the two modalities combined. The four memory skills were evaluated in 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, and auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality than through the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores or on the memory and sequencing spans. Good agreement was seen between the different modality conditions studied (auditory, visual, and auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, and sequencing span). A relatively lower agreement, measured using Bland-Altman plots, was noted only between the auditory and visual modalities and between the visual and auditory-visual modality conditions for the memory scores. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual, and combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality than through the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
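
    Agreement between two modality conditions, as assessed above with Bland-Altman plots, is quantified by plotting the per-child difference between the two scores against their mean and marking the bias and limits of agreement. The sketch below is a generic illustration with made-up score arrays; it does not reproduce the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(scores_a: np.ndarray, scores_b: np.ndarray, label: str):
    """Plot a Bland-Altman comparison of two paired score sets."""
    mean_scores = (scores_a + scores_b) / 2.0
    diff_scores = scores_a - scores_b
    bias = diff_scores.mean()
    loa = 1.96 * diff_scores.std(ddof=1)      # 95% limits of agreement
    plt.scatter(mean_scores, diff_scores)
    for level in (bias, bias + loa, bias - loa):
        plt.axhline(level, linestyle="--")
    plt.xlabel("Mean of the two modality scores")
    plt.ylabel("Difference (A - B)")
    plt.title(label)
    return bias, loa

# Hypothetical memory scores for 30 children in two modality conditions.
rng = np.random.default_rng(0)
auditory = rng.integers(5, 15, size=30).astype(float)
visual = auditory - rng.integers(0, 4, size=30)
bias, loa = bland_altman(auditory, visual, "Auditory vs. visual memory scores")
plt.show()
```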

  20. Degraded Auditory Processing in a Rat Model of Autism Limits the Speech Representation in Non-primary Auditory Cortex

    PubMed Central

    Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.

    2014-01-01

    Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033

  1. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  2. [Auditory training in workshops: group therapy option].

    PubMed

    Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa

    2006-01-01

    PURPOSE: to verify, in a group of individuals with mental retardation, the efficacy of auditory training in a workshop environment. METHOD: a longitudinal prospective study with 13 individuals with mental retardation from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n = 5) and control (n = 8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system had been verified through evoked otoacoustic emissions. Participants were evaluated with a specific protocol covering the auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination and auditory comprehension) at the beginning and at the end of the project. Data entry, processing and analysis were carried out with the Epi Info 6.04 software. RESULTS: the groups did not differ in age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in the auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p = 0.02), auditory sequencing (p = 0.006) and auditory discrimination (p = 0.03). CONCLUSION: group auditory training proved effective in individuals with mental retardation, with an improvement observed in the auditory abilities. More studies, with a larger number of participants, are necessary to confirm the findings of the present research. These results will help public health professionals to reconsider the therapy models used, so that they can apply specific methods according to individual needs, such as auditory training workshops.

  3. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  4. A Perceptuo-Cognitive-Motor Approach to the Special Child.

    ERIC Educational Resources Information Center

    Kornblum, Rena Beth

    A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…

  5. Corticofugal modulation of peripheral auditory responses

    PubMed Central

    Terreros, Gonzalo; Delano, Paul H.

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647

  6. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Auditory Task Irrelevance: A Basis for Inattentional Deafness

    PubMed Central

    Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.

    2018-01-01

    Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754

  8. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS)

    PubMed Central

    Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have to date not been well translated to the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing participants with bilateral subjective tinnitus and in controls, before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between the auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that altered resting functional connectivity, measured here with fNIRS technology, may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
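
    Resting-state functional connectivity of the kind measured above is commonly computed as the pairwise correlation of each channel's hemodynamic time course during the silent baseline. The sketch below is a generic illustration of that channel-pair correlation step with simulated data; the channel count, sampling rate, and window length are assumptions, and it is not the study's analysis code.

```python
import numpy as np

def connectivity_matrix(signals: np.ndarray) -> np.ndarray:
    """Pearson correlation between every pair of channels.

    signals: array of shape (n_channels, n_samples), e.g. oxy-hemoglobin
    concentration changes during a 60 s baseline period.
    """
    return np.corrcoef(signals)

# Simulated 60 s baseline for 20 fNIRS channels sampled at an assumed 10 Hz.
fs, duration, n_channels = 10.0, 60.0, 20
rng = np.random.default_rng(42)
baseline = rng.standard_normal((n_channels, int(fs * duration)))
conn = connectivity_matrix(baseline)

# Mean connectivity of a hypothetical auditory region-of-interest channel (index 0)
# with all other channels.
roi_connectivity = np.delete(conn[0], 0).mean()
print(f"Mean ROI connectivity: {roi_connectivity:.3f}")
```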

  9. Auditory habituation to simple tones: reduced evidence for habituation in children compared to adults

    PubMed Central

    Muenssinger, Jana; Stingl, Krunoslav T.; Matuz, Tamara; Binder, Gerhard; Ehehalt, Stefan; Preissl, Hubert

    2013-01-01

    Habituation—the response decrement to repetitively presented stimulation—is a basic cognitive capability and suited to investigate development and integrity of the human brain. To evaluate the developmental process of auditory habituation, the current study used magnetoencephalography (MEG) to investigate auditory habituation, dishabituation and stimulus specificity in children and adults and compared the results between age groups. Twenty-nine children (Mage = 9.69 years, SD ± 0.47) and 14 adults (Mage = 29.29 years, SD ± 3.47) participated in the study and passively listened to a habituation paradigm consisting of 100 trains of tones, each composed of five 500 Hz tones, one 750 Hz tone (the dishabituator) and another two 500 Hz tones, while focusing their attention on a silent movie. Adults showed the expected habituation and stimulus specificity within trains, while no response decrement was found between trains. Sensory adaptation or fatigue as a source for response decrement in adults is unlikely due to the strong reaction to the dishabituator (stimulus specificity) and strong mismatch negativity (MMN) responses. In children, however, neither habituation, dishabituation, nor stimulus specificity was found within trains, whereas a response decrement was found across trains. It can be speculated that the differences between children and adults are linked to differences in stimulus processing due to attentional processes. This study shows developmental differences in task-related brain activation and discusses the possible influence of broader concepts such as attention, which should be taken into account when comparing performance in an identical task between age groups. PMID:23882207

  10. Information processing capacity while wearing personal protective eyewear.

    PubMed

    Wade, Chip; Davis, Jerry; Marzilli, Thomas S; Weimar, Wendi H

    2006-08-15

    It is difficult to overemphasize the function vision plays in information processing, specifically in maintaining postural control. Vision appears to be an immediate, effortless process, suggesting that the eyes need only be open to make use of the visual information provided by the environment. This study investigates the effect of Occupational Safety and Health Administration regulated personal protective eyewear (29 CFR 1910.133) on physiological and cognitive factors associated with information processing capabilities. Twenty-one college students between the ages of 19 and 25 years were randomly tested in each of three eyewear conditions (control, new and artificially aged) on an inclined and horizontal support surface for auditory and visual stimulus reaction time. Data collection trials consisted of 50 randomly selected (25 auditory, 25 visual) stimuli over a 10-min surface-eyewear condition trial. Auditory stimulus reaction time was significantly affected by the surface by eyewear interaction (F2,40 = 7.4; p < 0.05). Similarly, analysis revealed a significant surface by eyewear interaction in reaction time following the visual stimulus (F2,40 = 21.7; p < 0.05). The current findings do not trivialize the importance of personal protective eyewear usage in an occupational setting; rather, they suggest the value of future research focused on the effect that personal protective eyewear has on the physiological, cognitive and biomechanical contributions to postural control. These findings suggest that while personal protective eyewear may serve to protect an individual from eye injury, an individual's use of such personal protective eyewear may have deleterious effects on sensory information associated with information processing and postural control.

  11. Prefrontal Hemodynamics of Physical Activity and Environmental Complexity During Cognitive Work.

    PubMed

    McKendrick, Ryan; Mehta, Ranjana; Ayaz, Hasan; Scheldrup, Melissa; Parasuraman, Raja

    2017-02-01

    The aim of this study was to assess performance and cognitive states during cognitive work in the presence of physical work and in natural settings. Authors of previous studies have examined the interaction between cognitive and physical work, finding performance decrements in working memory. Neuroimaging has revealed increases and decreases in prefrontal oxygenated hemoglobin during the interaction of cognitive and physical work. The effect of environment on cognitive-physical dual tasking has not been previously considered. Thirteen participants were monitored with wireless functional near-infrared spectroscopy (fNIRS) as they performed an auditory 1-back task while sitting, walking indoors, and walking outdoors. Relative to sitting and walking indoors, auditory working memory performance declined when participants were walking outdoors. Sitting during the auditory 1-back task increased oxygenated hemoglobin and decreased deoxygenated hemoglobin in bilateral prefrontal cortex. Walking reduced the total hemoglobin available to bilateral prefrontal cortex. An increase in environmental complexity reduced oxygenated hemoglobin and increased deoxygenated hemoglobin in bilateral prefrontal cortex. Wireless fNIRS is capable of monitoring cognitive states in naturalistic environments. Selective attention and physical work compete with executive processing. During executive processing, concurrent loading of selective attention and physical work results in deactivation of bilateral prefrontal cortex and degraded working memory performance, indicating that physical work and concomitant selective attention may supersede executive processing in the distribution of mental resources. This research informs decision-making procedures in work where working memory, physical activity, and attention interact. Where working memory is paramount, precautions should be taken to eliminate competition from physical work and selective attention.

  12. Assessment of functional conditions of basketball and football players during the load by applying the model of integrated evaluation.

    PubMed

    Zumbakytė-Šermukšnienė, Renata; Kajėnienė, Alma; Vainoras, Alfonsas; Berškienė, Kristina; Augutienė, Viktorija

    2010-01-01

    We consider the human body as an adaptable, complex, and dynamic system capable of organizing itself, although no single factor within the system can accomplish this on its own. Using the computerized ECG analysis system "Kaunas-load", with parallel registration of the ECG, body motor characteristics, arterial blood pressure (ABP), or other processes characterizing hemodynamics, makes it possible to reveal and evaluate the synergistic aspects of the essential systems of the human body, which considerably extends the possibilities of functional diagnostics. The aim of the study was to determine the features of alterations in the functional condition of basketball and football players and nonathletes during the bicycle ergometry test by applying the model of evaluation of the functional condition of the human body. The study population consisted of 266 healthy athletes and nonathletes. Groups of male basketball players, male football players, male nonathletes, female basketball players, and female nonathletes were studied. The computerized ECG analysis system "Kaunas-load", which synchronously registers and analyzes the power developed by the subject and the 12-lead ECG, was used to evaluate the functional condition of the cardiovascular system (CVS). The subjects performed a computer-based bicycle ergometry test. The following ECG parameters at rest and throughout the load were evaluated: heart rate (HR), the JT interval, and the deduced JT/RR ratio index, which reflects the relationship between the regulatory and supplying systems. After measuring ABP, the pulse amplitude (S-D) was evaluated. The relative pulse blood pressure amplitude (S-D)/S, which depicts the connection between the periphery and the regulatory systems, was also evaluated. The speeds of changes in physiological parameters during physical load were evaluated as well. The heart rate and JT/RR ratio of athletes at rest and during load were lower, and the JT interval at rest was longer and shortened more slowly during load, compared with healthy nonathletes. The pulse arterial blood pressure amplitude of men at rest and during load was higher than that of women. The pulse ABP amplitude of athletes was higher than that of nonathletes. The relative pulse ABP amplitude at rest was higher in the groups of men than in the groups of women. The relative pulse amplitude of female basketball players at rest and during load was higher than that of female nonathletes. Significant differences in the dynamics of the speed of changes in HR, the pulse ABP amplitude, and the relative pulse ABP amplitude were observed among male and female basketball players, male football players, and male and female nonathletes. The newly deduced parameters, namely the speeds of changes in the parameters as the load phase changes, reflect well the peculiarities of the functional condition of the human body during the bicycle ergometry test. The combination of these newly deduced parameters with the customary parameters reveals new functional peculiarities of the human body.
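    The derived indices named in this record are simple ratios of measured quantities. A minimal sketch of that arithmetic (illustrative values and variable names; not the "Kaunas-load" software itself):

```python
def derived_indices(hr_bpm, jt_s, systolic, diastolic):
    """Compute the derived parameters described for the 'Kaunas-load' analysis.

    hr_bpm   : heart rate in beats per minute
    jt_s     : JT interval in seconds
    systolic, diastolic : arterial blood pressure in mmHg
    """
    rr_s = 60.0 / hr_bpm                  # RR interval in seconds
    jt_rr = jt_s / rr_s                   # JT/RR ratio (regulatory vs. supplying systems)
    pulse_amp = systolic - diastolic      # pulse ABP amplitude (S - D)
    rel_pulse_amp = pulse_amp / systolic  # relative pulse amplitude (S - D)/S
    return jt_rr, pulse_amp, rel_pulse_amp

def rate_of_change(value_start, value_end, duration_s):
    """Speed of change of a parameter between two load phases (units per second)."""
    return (value_end - value_start) / duration_s

# example: rest vs. one load stage (illustrative numbers only)
print(derived_indices(hr_bpm=60, jt_s=0.30, systolic=120, diastolic=80))
print(rate_of_change(60, 120, duration_s=180))  # HR rising from 60 to 120 bpm over 3 minutes
```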

  13. Auditory Processing of Older Adults with Probable Mild Cognitive Impairment

    ERIC Educational Resources Information Center

    Edwards, Jerri D.; Lister, Jennifer J.; Elias, Maya N.; Tetlow, Amber M.; Sardina, Angela L.; Sadeq, Nasreen A.; Brandino, Amanda D.; Bush, Aryn L. Harrison

    2017-01-01

    Purpose: Studies suggest that deficits in auditory processing predict cognitive decline and dementia, but those studies included limited measures of auditory processing. The purpose of this study was to compare older adults with and without probable mild cognitive impairment (MCI) across two domains of auditory processing (auditory performance in…

  14. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to these models to include nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  15. Naftidrofuryl affects neurite regeneration by injured adult auditory neurons.

    PubMed

    Lefebvre, P P; Staecker, H; Moonen, G; van de Water, T R

    1993-07-01

    Afferent auditory neurons are essential for the transmission of auditory information from Corti's organ to the central auditory pathway. Auditory neurons are very sensitive to acute insult and have a limited ability to regenerate injured neuronal processes. Therefore, these neurons appear to be a limiting factor in restoration of hearing function following an injury to the peripheral auditory receptor. In a previous study, nerve growth factor (NGF) was shown to stimulate neurite repair but not survival of injured auditory neurons. In this study, we have demonstrated a neuritogenesis-promoting effect of naftidrofuryl in an in vitro model of injury to adult auditory neurons, i.e., dissociated cell cultures of adult rat spiral ganglia. Conversely, naftidrofuryl did not have any demonstrable survival-promoting effect on these in vitro preparations of injured auditory neurons. The potential uses of this drug as a therapeutic agent in acute diseases of the inner ear are discussed in the light of these observations.

  16. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  17. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  18. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  19. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  20. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  1. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
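    Stimulus reconstruction of this kind is commonly implemented as a regularized linear mapping from time-lagged neural channels back to a stimulus feature such as the speech envelope, with fidelity taken as the correlation between reconstructed and actual features. A minimal sketch under those assumptions, on synthetic data (not the authors' systems-theoretic pipeline):

```python
import numpy as np

def lagged_design(neural, max_lag):
    """Stack time-lagged copies of each channel: shape (n_times, n_channels * (max_lag + 1))."""
    cols = [np.roll(neural, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(cols, axis=1)
    X[:max_lag] = 0.0   # zero out samples contaminated by wrap-around
    return X

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# synthetic example: an "envelope" leaks into 50 sensor channels with noise
rng = np.random.default_rng(2)
n_times, n_ch = 5000, 50
envelope = rng.normal(size=n_times)
mixing = rng.normal(size=n_ch)
neural = np.outer(envelope, mixing) + rng.normal(scale=5.0, size=(n_times, n_ch))

X = lagged_design(neural, max_lag=10)
w = ridge_fit(X[:4000], envelope[:4000], lam=100.0)   # train on the first part
recon = X[4000:] @ w                                  # reconstruct the held-out part
fidelity = np.corrcoef(recon, envelope[4000:])[0, 1]  # reconstruction fidelity
print(round(fidelity, 2))
```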

  2. Upper-Tropospheric Winds Derived from Geostationary Satellite Water Vapor Observations

    NASA Technical Reports Server (NTRS)

    Velden, Christopher S.; Hayden, Christopher M.; Nieman, Steven J.; Menzel, W. Paul; Wanzong, Steven; Goerss, James S.

    1997-01-01

    The coverage and quality of remotely sensed upper-tropospheric moisture parameters have improved considerably with the deployment of a new generation of operational geostationary meteorological satellites: GOES-8/9 and GMS-5. The GOES-8/9 water vapor imaging capabilities have increased as a result of improved radiometric sensitivity and higher spatial resolution. The addition of a water vapor sensing channel on the latest GMS permits nearly global viewing of upper-tropospheric water vapor (when joined with GOES and Meteosat) and enhances the commonality of geostationary meteorological satellite observing capabilities. Upper-tropospheric motions derived from sequential water vapor imagery provided by these satellites can be objectively extracted by automated techniques. Wind fields can be deduced in both cloudy and cloud-free environments. In addition to the spatially coherent nature of these vector fields, the GOES-8/9 multispectral water vapor sensing capabilities allow for determination of wind fields over multiple tropospheric layers in cloud-free environments. This article provides an update on the latest efforts to extract water vapor motion displacements over meteorological scales ranging from subsynoptic to global. The potential applications of these data to impact operations, numerical assimilation and prediction, and research studies are discussed.
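    Motion vectors are typically extracted from sequential imagery by matching a small target window from one frame against a search region in the next frame and taking the displacement that matches best. A toy sketch of that idea using plain sum-of-squared-differences matching on synthetic fields (operational tracking algorithms are considerably more elaborate):

```python
import numpy as np

def track_patch(img1, img2, row, col, half=8, search=10):
    """Find the displacement of a (2*half)^2 patch from img1 within img2.

    Returns (drow, dcol) minimizing the sum of squared differences
    over a +/- `search` pixel window.
    """
    patch = img1[row - half:row + half, col - half:col + half]
    best, best_shift = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = img2[row + dr - half:row + dr + half, col + dc - half:col + dc + half]
            score = np.sum((cand - patch) ** 2)
            if score < best:
                best, best_shift = score, (dr, dc)
    return best_shift

# synthetic "water vapor" field shifted by a known displacement between frames
rng = np.random.default_rng(3)
frame1 = rng.normal(size=(128, 128))
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))
print(track_patch(frame1, frame2, row=64, col=64))   # expected (3, -5)
```

Dividing the recovered pixel displacement by the frame interval and scaling by the pixel footprint would give a wind vector; height assignment from the water vapor channel is a separate step.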

  3. Weak Responses to Auditory Feedback Perturbation during Articulation in Persons Who Stutter: Evidence for Abnormal Auditory-Motor Transformation

    PubMed Central

    Cai, Shanqing; Beal, Deryk S.; Ghosh, Satrajit S.; Tiede, Mark K.; Guenther, Frank H.; Perkell, Joseph S.

    2012-01-01

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants’ compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls’ and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands. PMID:22911857

  4. The Corticofugal Effects of Auditory Cortex Microstimulation on Auditory Nerve and Superior Olivary Complex Responses Are Mediated via Alpha-9 Nicotinic Receptor Subunit

    PubMed Central

    Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.

    2016-01-01

    Background and Objective The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498

  5. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  6. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  7. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    ERIC Educational Resources Information Center

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  8. LAMP: 100+ Systematic Exercise Lessons for Developing Linguistic Auditory Memory Patterns in Beginning Readers.

    ERIC Educational Resources Information Center

    Valett, Robert E.

    Research findings on auditory sequencing and auditory blending and fusion, auditory-visual integration, and language patterns are presented in support of the Linguistic Auditory Memory Patterns (LAMP) program. LAMP consists of 100 developmental lessons for young students with learning disabilities or language problems. The lessons are included in…

  9. Maturation of Visual and Auditory Temporal Processing in School-Aged Children

    ERIC Educational Resources Information Center

    Dawes, Piers; Bishop, Dorothy V. M.

    2008-01-01

    Purpose: To examine development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years. Auditory processes…

  10. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  11. Segregating the neural correlates of physical and perceived change in auditory input using the change deafness effect.

    PubMed

    Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M

    2013-05-01

    Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.

  12. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    PubMed

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  13. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend in enabling the control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.
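    As a rough illustration of the MPC component described above (a generic receding-horizon controller for a toy one-zone thermal model, not the paper's formulation), the sketch below optimizes a control sequence over a short horizon and applies only the first input; a semantic rule-checking layer could be imagined as adjusting the bounds or setpoints fed into this optimization.

```python
import numpy as np
from scipy.optimize import minimize

# toy one-zone thermal model: T[k+1] = a*T[k] + b*u[k] + c*T_out  (illustrative coefficients)
a, b, c = 0.9, 0.5, 0.1
T_out, T_set = 5.0, 21.0
horizon, u_max = 10, 3.0

def simulate(T0, u):
    T, traj = T0, []
    for uk in u:
        T = a * T + b * uk + c * T_out
        traj.append(T)
    return np.array(traj)

def cost(u, T0):
    traj = simulate(T0, u)
    comfort = np.sum((traj - T_set) ** 2)   # setpoint-tracking error
    energy = 0.1 * np.sum(u ** 2)           # control effort
    return comfort + energy

def mpc_step(T0):
    """Solve the horizon problem and return only the first control input."""
    u0 = np.zeros(horizon)
    res = minimize(cost, u0, args=(T0,), bounds=[(0.0, u_max)] * horizon)
    return res.x[0]

# closed-loop run: apply the first input, advance the plant, repeat
T = 15.0
for step in range(5):
    u = mpc_step(T)
    T = a * T + b * u + c * T_out
    print(f"step {step}: u={u:.2f}, T={T:.2f}")
```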

  14. A Systematic Analysis of a Deep Mouse Epididymal Sperm Proteome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauvin, Theodore; Xie, Fang; Liu, Tao

    Spermatozoa are highly specialized cells that, when mature, are capable of navigating the female reproductive tract and fertilizing an oocyte. The sperm cell is thought to be largely quiescent in terms of transcriptional and translational activity. As a result, once it has left the male reproductive tract, the sperm cell is essentially operating with a static population of proteins. It is therefore theoretically possible to understand the protein networks contained in a sperm cell and to deduce its cellular function capabilities. To this end we have performed a proteomic analysis of mouse sperm isolated from the cauda epididymis and have confidently identified 2,850 proteins, which is the most comprehensive sperm proteome for any species reported to date. These proteins comprise many complete cellular pathways, including those for energy production via glycolysis, β-oxidation and oxidative phosphorylation, protein folding and transport, and cell signaling systems. This proteome should prove a useful tool for assembly and testing of protein networks important for sperm function.

  15. Research on capability of detecting ballistic missile by near space infrared system

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Jiang, Wei; Jiang, Feng

    2018-01-01

    Infrared detection of ballistic missiles from a near-space platform can effectively make up for the shortcomings of traditional early-warning satellites, which are costly, and of ground-based early-warning radar, which is limited by the Earth's curvature. In terms of target detection capability, the conventional contrast-based detection-range formula ignores the background emissivity and is valid only for monochromatic light; an improved contrast-based detection-range formula is therefore proposed. The parameters of the near-space infrared imaging system are introduced, and an expression for the contrast-based detection range for target detection from a near-space platform is deduced. The detection range of the near-space infrared system for the booster-stage ballistic missile skin, the tail nozzle, and the tail flame is calculated. The simulation results show that the near-space infrared system performs best when detecting tail-flame radiation.
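    The band-integrated contrast underlying such a detection-range formula can be illustrated with a simple blackbody toy model: integrate Planck spectral radiance over the sensor band for target and background, weighted by assumed emissivities. The sketch below uses illustrative temperatures, emissivities, and band limits, not the paper's parameters:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody, W / (m^2 * sr * m)."""
    return (2 * H * C**2 / wavelength_m**5) / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def band_radiance(temp_k, emissivity, band=(3e-6, 5e-6), n=2000):
    """Emissivity-weighted radiance integrated over a wavelength band (trapezoid rule)."""
    lam = np.linspace(band[0], band[1], n)
    rad = planck_radiance(lam, temp_k)
    return emissivity * np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(lam))

# illustrative numbers: hot missile skin against a cold sky background
target = band_radiance(temp_k=450.0, emissivity=0.85)
background = band_radiance(temp_k=230.0, emissivity=0.98)
contrast = target - background          # in-band radiance contrast, W/(m^2*sr)
print(f"contrast = {contrast:.2f} W m^-2 sr^-1")
```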

  16. PSPICE controlled-source models of analogous circuit for Langevin type piezoelectric transducer

    NASA Astrophysics Data System (ADS)

    Chen, Yeongchin; Wu, Menqjiun; Liu, Weikuo

    2007-02-01

    The design and construction of wide-band, high-efficiency acoustical projectors has long been considered an art beyond the capabilities of many smaller groups. Langevin-type piezoelectric transducers have been the leading candidates for sonar array systems used in underwater communication. The transducers are fabricated by bolting a head mass and a tail mass onto both ends of a stack of piezoelectric ceramic, so as to satisfy multiple, conflicting design requirements for high-power transmission. The aim of this research is to study the characteristics of Langevin-type piezoelectric transducers under different metal loadings. First, the Mason equivalent circuit is used to model the segmented piezoelectric ceramic; then, the impedance network of the tail and head masses is deduced from Newton's laws of motion. To obtain the optimal solution to a specific design formulation, PSPICE controlled-source programming techniques can be applied. An example of the application of PSPICE models to Langevin-type transducer analysis is presented, and the simulation results are in good agreement with the experimental measurements.

  17. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    PubMed

    Predoi, Mihai Valentin

    2014-09-01

    The dispersion curves for hollow multilayered cylinders are prerequisites for any practical guided-wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and viscoelastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist for formulating the finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a radial segment model intersecting the layers between the inner and outer radii of the pipe. The latter approach is adopted here, and distinct differential problems are deduced for the longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient for the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with a nonlinear viscoelastic fluid, pipes with viscoelastic coatings, and a fully orthotropic hollow cylinder all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
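    At the heart of the semi-analytical finite element approach is a matrix eigenvalue problem: for a fixed wavenumber, the assembled stiffness and mass matrices yield the modal frequencies, and sweeping the wavenumber traces out the dispersion curves. A toy sketch of that step with stand-in matrices (not an actual SAFE assembly for a layered pipe):

```python
import numpy as np
from scipy.linalg import eigh

def modal_frequencies(K, M):
    """Solve (K - w^2 M) u = 0 for angular frequencies w, given symmetric K and M."""
    w2, _ = eigh(K, M)                 # generalized symmetric eigenproblem
    w2 = np.clip(w2, 0.0, None)        # guard against tiny negative round-off
    return np.sqrt(w2)

def dispersion_sweep(K1, K3, M, wavenumbers):
    """Toy dispersion sweep: K(k) = K1 + k^2 * K3, one row of frequencies per wavenumber."""
    curves = []
    for k in wavenumbers:
        K = K1 + (k ** 2) * K3
        curves.append(modal_frequencies(K, M))   # phase velocity of each mode would be w / k
    return np.array(curves)                      # shape (n_wavenumbers, n_modes)

# toy 3-DOF system standing in for an assembled radial-segment model
K1 = np.diag([2.0, 3.0, 4.0])
K3 = np.eye(3)
M = np.eye(3)
ks = np.linspace(0.1, 10.0, 5)
print(dispersion_sweep(K1, K3, M, ks))
```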

  18. Audiological and electrophysiological evaluation of children with acquired immunodeficiency syndrome (AIDS).

    PubMed

    Matas, Carla Gentile; Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Gonçalves, Isabela Crivellaro

    2006-08-01

    We examined the peripheral auditory system and the auditory brainstem pathway of children with Acquired Immunodeficiency Syndrome (AIDS). One hundred and one children, 51 with AIDS diagnosis and 50 normal children were evaluated. Audiological assessment included immittance measures, pure tone and speech audiometry and auditory brainstem response (ABR). The children with AIDS more frequently had abnormal results than did their matched controls, presenting either peripheral or auditory brainstem impairment. We suggest that AIDS be considered a risk factor for peripheral and/or auditory brainstem disorders. Further research should be carried out to investigate the auditory effects of HIV infection along the auditory pathway.

  19. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    PubMed

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

    This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the association between those elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    PubMed Central

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  1. Human Engineer’s Guide to Auditory Displays. Volume 2. Elements of Signal Reception and Resolution Affecting Auditory Displays.

    DTIC Science & Technology

    1984-08-01

    This work reviews monaural and binaural signal detection, auditory discrimination and localization, and reaction times, as they pertain to the major areas of auditory processing in humans.

  2. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves and a psychoacoustic test of central auditory functions, the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. After treatment and therapy, speech was reassessed, the psychoacoustic tests were repeated, and P300 cortical potentials were recorded again. Statistical analyses were then performed. The analyses revealed that auditory training in patients with dyslalia and other central auditory disorders is highly effective. Auditory training may be a very effective adjunct to speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  3. The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition

    PubMed Central

    McLachlan, Neil M.; Wilson, Sarah J.

    2017-01-01

    The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information automatically recognizes incoming auditory signals and predicts likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850

  4. Minimal effects of visual memory training on the auditory performance of adult cochlear implant users

    PubMed Central

    Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie

    2014-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087

  5. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  6. Enhanced Development of Auditory Change Detection in Musically Trained School-Aged Children: A Longitudinal Event-Related Potential Study

    ERIC Educational Resources Information Center

    Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Ojala, Pauliina; Huotilainen, Minna

    2014-01-01

    Adult musicians show superior auditory discrimination skills when compared to non-musicians. The enhanced auditory skills of musicians are reflected in the augmented amplitudes of their auditory event-related potential (ERP) responses. In the current study, we investigated longitudinally the development of auditory discrimination skills in…

  7. Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve

    PubMed Central

    Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.

    2015-01-01

    The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538

  8. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
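    Conceptually, once the light and sound onsets have been captured as simultaneously sampled traces (for example from a photodiode and a microphone), the audiovisual lag is just the difference between their onset times. A minimal sketch of that computation on synthetic traces (hypothetical thresholds and sampling rate; the study itself used a hardware Black Box Toolkit):

```python
import numpy as np

def onset_time_ms(trace, fs, threshold):
    """Return the time (in ms) of the first sample whose magnitude exceeds `threshold`."""
    idx = np.flatnonzero(np.abs(trace) > threshold)
    if idx.size == 0:
        raise ValueError("no onset found above threshold")
    return 1000.0 * idx[0] / fs

# synthetic example: visual onset at 100 ms, audio onset at 137 ms, sampled at 10 kHz
fs = 10_000
t = np.arange(0, 0.5, 1 / fs)
photodiode = (t >= 0.100).astype(float)                               # screen goes bright
microphone = np.where(t >= 0.137, np.sin(2 * np.pi * 1000 * t), 0.0)  # tone starts

lag_ms = onset_time_ms(microphone, fs, 0.1) - onset_time_ms(photodiode, fs, 0.5)
print(f"audio lags visual by about {lag_ms:.1f} ms")
```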

  9. Auditory Cortex Is Required for Fear Potentiation of Gap Detection

    PubMed Central

    Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.

    2014-01-01

    Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510

  10. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  11. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  12. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE PAGES

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...

    2017-06-30

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  13. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  14. A basic study on universal design of auditory signals in automobiles.

    PubMed

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impressions made by various kinds of auditory signals currently used in automobiles, together with a comprehensive evaluation, were measured by the semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This trend is advantageous for the aged, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of the sounds judged suitable for the various auditory signals indicates that the suitable sounds are not easily masked. Choosing suitable auditory signals for the various purposes is thus a good solution from the viewpoint of universal design.

  15. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  16. [A case of pure word deafness and auditory agnosia associated with bilateral temporo-parietal lesions].

    PubMed

    Sato, M; Yasui, N; Isobe, I; Kobayashi, T

    1982-10-01

    A 49-year-old right-handed female is reported. She showed pure word deafness and auditory agnosia because of bilateral temporo-parietal lesions. The left lesion resulted from angiospasm of the left anterior and middle cerebral arteries after subarachnoid hemorrhage due to a ruptured aneurysm of the left carotid artery, and the right one resulted from a subcortical hematoma after a V-P shunt operation. CT scans revealed abnormal low-density areas in the bilateral temporo-parietal regions seven months after onset. Neuropsychological findings were as follows: there were no aphasic symptoms such as paraphasia, word-finding difficulties, or disturbances of spontaneous writing, reading, and calculation. However, her auditory comprehension was severely disturbed, and she could neither repeat words after the tester nor write from dictation. She also could not recognize meaningful sounds and music in spite of normal hearing sensitivity for pure tones, BSR, and AER. We discuss the neuropsychological mechanisms of auditory recognition and assume that each hemisphere might process both verbal and non-verbal auditory stimuli in the secondary auditory area. The auditory input may be recognized at the left association area, the final level of this mechanism. The pure word deafness and auditory agnosia of this case might be caused by disruption of the right secondary auditory area, of the pathway between the left primary auditory area and the left secondary auditory area, and of the pathway between the left and right secondary auditory areas.

  17. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  18. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  19. Auditory reafferences: the influence of real-time feedback on movement control.

    PubMed

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  20. Visual form predictions facilitate auditory processing at the N1.

    PubMed

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  1. Electrophysiological Evidence for the Sources of the Masking Level Difference.

    PubMed

    Fowler, Cynthia G

    2017-08-16

    The purpose of this review article is to summarize evidence from auditory evoked potential studies describing the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem code temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.

  2. Amphibious hearing in ringed seals (Pusa hispida): underwater audiograms, aerial audiograms and critical ratio measurements.

    PubMed

    Sills, Jillian M; Southall, Brandon L; Reichmuth, Colleen

    2015-07-01

    Ringed seals (Pusa hispida) are semi-aquatic marine mammals with a circumpolar Arctic distribution. In this study, we investigate the amphibious hearing capabilities of ringed seals to provide auditory profiles for this species across the full range of hearing. Using psychophysical methods with two trained ringed seals, detection thresholds for narrowband signals were measured under quiet, carefully controlled environmental conditions to generate aerial and underwater audiograms. Masked underwater thresholds were measured in the presence of octave-band noise to determine critical ratios. Results indicate that ringed seals possess hearing abilities comparable to those of spotted seals (Phoca largha) and harbor seals (Phoca vitulina), and considerably better than previously reported for ringed and harp seals. Best sensitivity was 49 dB re. 1 µPa (12.8 kHz) in water, and -12 dB re. 20 µPa (4.5 kHz) in air, rivaling the acute hearing abilities of some fully aquatic and terrestrial species in their respective media. Critical ratio measurements ranged from 14 dB at 0.1 kHz to 31 dB at 25.6 kHz, suggesting that ringed seals--like other true seals--can efficiently extract signals from background noise across a broad range of frequencies. The work described herein extends similar research on amphibious hearing in spotted seals recently published by the authors. These parallel studies enhance our knowledge of the auditory capabilities of ice-living seals, and inform effective management strategies for these and related species in a rapidly changing Arctic environment. © 2015. Published by The Company of Biologists Ltd.
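
    For readers less familiar with the metric, the critical ratio quoted above is conventionally computed as the difference between the masked detection threshold and the spectral density level of the masking noise; under Fletcher's classic equal-power assumption it also yields an estimate of the equivalent masking bandwidth. A minimal worked version using the figures quoted in the abstract (the bandwidth estimates are illustrative consequences of that assumption, not values reported by the authors):

```latex
% critical ratio: masked threshold minus noise spectral density level
CR\ (\mathrm{dB}) = L_{\mathrm{signal}}\ (\mathrm{dB\ re\ 1\,\mu Pa})
                  - L_{\mathrm{noise}}\ (\mathrm{dB\ re\ 1\,\mu Pa^{2}/Hz})
% equal-power (Fletcher) assumption: equivalent masking bandwidth
\Delta f_{\mathrm{eq}} = 10^{CR/10}\ \mathrm{Hz}
% e.g. CR = 14 dB at 0.1 kHz -> ~25 Hz;  CR = 31 dB at 25.6 kHz -> ~1.3 kHz
```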

  3. Behavioral Signs of (Central) Auditory Processing Disorder in Children With Nonsyndromic Cleft Lip and/or Palate: A Parental Questionnaire Approach.

    PubMed

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-03-01

    Objective Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an auditory diagnosis is made for this population.

  4. The Development of Auditory Perception in Children Following Auditory Brainstem Implantation

    PubMed Central

    Colletti, Liliana; Shannon, Robert V.; Colletti, Vittorio

    2014-01-01

    Auditory brainstem implants (ABI) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children up to 12 years following ABI implantation. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear post-meningitic ossification in 3, NF2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital non-auditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and six with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without non-auditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI implantation reveal significant auditory benefit in most children, and open set auditory recognition in many. PMID:25377987

  5. Clinical Features of Auditory Hallucinations in Patients With Dementia With Lewy Bodies: A Soundtrack of Visual Hallucinations.

    PubMed

    Tsunoda, Naoko; Hashimoto, Mamoru; Ishikawa, Tomohisa; Fukuhara, Ryuji; Yuki, Seiji; Tanaka, Hibiki; Hatada, Yutaka; Miyagawa, Yusuke; Ikeda, Manabu

    2018-05-08

    Auditory hallucinations are an important symptom for diagnosing dementia with Lewy bodies (DLB), yet they have received less attention than visual hallucinations. We investigated the clinical features of auditory hallucinations and the possible mechanisms by which they arise in patients with DLB. We recruited 124 consecutive patients with probable DLB (diagnosis based on the DLB International Workshop 2005 criteria; study period: June 2007-January 2015) from the dementia referral center of Kumamoto University Hospital. We used the Neuropsychiatric Inventory to assess the presence of auditory hallucinations, visual hallucinations, and other neuropsychiatric symptoms. We reviewed all available clinical records of patients with auditory hallucinations to assess their clinical features. We performed multiple logistic regression analysis to identify significant independent predictors of auditory hallucinations. Of the 124 patients, 44 (35.5%) had auditory hallucinations and 75 (60.5%) had visual hallucinations. The majority of patients (90.9%) with auditory hallucinations also had visual hallucinations. Auditory hallucinations consisted mostly of human voices, and 90% of patients described them as sounding like a soundtrack accompanying the scene. Multiple logistic regression showed that the presence of auditory hallucinations was significantly associated with female sex (P = .04) and hearing impairment (P = .004). The analysis also revealed independent correlations between the presence of auditory hallucinations and visual hallucinations (P < .001), phantom boarder delusions (P = .001), and depression (P = .038). Auditory hallucinations are common neuropsychiatric symptoms in DLB and usually appear as a background soundtrack accompanying visual hallucinations. Auditory hallucinations in patients with DLB are more likely to occur in women and those with impaired hearing, depression, delusions, or visual hallucinations. © Copyright 2018 Physicians Postgraduate Press, Inc.

  6. Auditory Cortex Basal Activity Modulates Cochlear Responses in Chinchillas

    PubMed Central

    León, Alex; Elgueda, Diego; Silva, María A.; Hamamé, Carlos M.; Delano, Paul H.

    2012-01-01

    Background The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. Methodology/Principal Findings Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in CM amplitude in both types of experiments, the most common effect being a CM decrease, observed in fifteen animals. Concomitant with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. Conclusions/Significance These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the obtained effects suggests that there are at least two functional pathways from the auditory cortex to the cochlea. PMID:22558383

  7. Musical Experience, Auditory Perception and Reading-Related Skills in Children

    PubMed Central

    Banai, Karen; Ahissar, Merav

    2013-01-01

    Background The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Methodology/Principal Findings Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case. PMID:24086654

  8. Fundamental deficits of auditory perception in Wernicke's aphasia.

    PubMed

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Acoustic duetting in Drosophila virilis relies on the integration of auditory and tactile signals

    PubMed Central

    LaRue, Kelly M; Clemens, Jan; Berman, Gordon J; Murthy, Mala

    2015-01-01

    Many animal species, including insects, are capable of acoustic duetting, a complex social behavior in which males and females tightly control the rate and timing of their courtship song syllables relative to each other. The mechanisms underlying duetting remain largely unknown across model systems. Most studies of duetting focus exclusively on acoustic interactions, but the use of multisensory cues should aid in coordinating behavior between individuals. To test this hypothesis, we develop Drosophila virilis as a new model for studies of duetting. By combining sensory manipulations, quantitative behavioral assays, and statistical modeling, we show that virilis females combine precisely timed auditory and tactile cues to drive song production and duetting. Tactile cues delivered to the abdomen and genitalia play the larger role in females, as even headless females continue to coordinate song production with courting males. These data, therefore, reveal a novel, non-acoustic, mechanism for acoustic duetting. Finally, our results indicate that female-duetting circuits are not sexually differentiated, as males can also produce ‘female-like’ duets in a context-dependent manner. DOI: http://dx.doi.org/10.7554/eLife.07277.001 PMID:26046297

  10. Neural Correlates of Multisensory Perceptual Learning

    PubMed Central

    Powers, Albert R.; Hevey, Matthew A.; Wallace, Mark T.

    2012-01-01

    The brain’s ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least one week after cessation of training. In the current study we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding and that likely represent the substrate for a multisensory temporal binding window. PMID:22553032

  11. Effects of single cycle binaural beat duration on auditory evoked potentials.

    PubMed

    Mihajloski, Todor; Bohorquez, Jorge; Özdamar, Özcan

    2014-01-01

    Binaural beat (BB) illusions are experienced as continuous central pulsations when two sounds with slightly different frequencies are delivered, one to each ear. It has been shown that steady-state auditory evoked potentials (AEPs) to BBs can be captured and investigated. The authors recently developed a new method of evoking transient AEPs to binaural beats using frequency modulated stimuli. This methodology was able to create single BBs in predetermined intervals with varying carrier frequencies. This study examines the effects of the BB duration and the frequency modulating component of the stimulus on the binaural beats and their evoked potentials. Normal-hearing subjects were tested with a set of four durations (25, 50, 100, and 200 ms) with two stimulation configurations, binaural dichotic (binaural beats) and diotic (frequency modulation). The results obtained from the study showed that out of the given durations, the 100 ms beat was capable of evoking the largest-amplitude responses. The frequency modulation effect showed a decrease in peak amplitudes with increasing beat duration until their complete disappearance at 200 ms. Even though, at 200 ms, the frequency modulation effects were not present, the binaural beats were still perceived and captured as evoked potentials.
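
    As a point of reference for the dichotic configuration described above, the sketch below generates a classic continuous binaural-beat stimulus in Python: each ear receives a pure tone, and the beat exists only after binaural combination. It is an illustrative example only; the carrier frequency, beat rate, and duration are arbitrary choices, and the authors' actual stimuli were single frequency-modulated beats rather than continuous tones.

```python
# Illustrative dichotic binaural-beat stimulus (not the authors' FM method).
import numpy as np

fs = 44100            # sampling rate (Hz)
dur = 1.0             # stimulus duration (s)
f_left = 500.0        # pure tone delivered to the left ear (Hz)
f_beat = 10.0         # interaural frequency offset = perceived beat rate (Hz)

t = np.arange(int(fs * dur)) / fs
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * (f_left + f_beat) * t)

# Two-channel signal: the 10 Hz "beat" is not present in either ear's
# waveform; it emerges centrally when the auditory system combines the ears.
stereo = np.stack([left, right], axis=1).astype(np.float32)
```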

  12. Dolphin biosonar target detection in noise: wrap up of a past experiment.

    PubMed

    Au, Whitlow W L

    2014-07-01

    The target detection capability of bottlenose dolphins in the presence of artificial masking noise was first studied by Au and Penner [J. Acoust. Soc. Am. 70, 687-693 (1981)], who determined the dolphins' target detection threshold as a function of the ratio of the echo energy flux density to the estimated received noise spectral density. Such a metric was commonly used in human psychoacoustics despite the fact that the echo energy flux density is not compatible with noise spectral density, which is average intensity per Hz. Since those earlier detection-in-noise studies, two important parameters, the dolphin integration time applicable to broadband clicks and the dolphin's auditory filter shape, were determined. The inclusion of these two parameters allows for the estimation of the received energy flux density of the masking noise, so that dolphin target detection can now be determined as a function of the ratio of the received energy of the echo to the received noise energy. Using an integration time of 264 μs and an auditory bandwidth of 16.7 kHz, the ratio of the echo energy to noise energy at the target detection threshold is approximately 1 dB.
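
    The conversion between the two metrics can be made explicit. Assuming the masking noise is flat across the auditory filter, the received noise energy flux density is the noise spectral density scaled by the time-bandwidth product; a short worked version using the figures quoted in the abstract:

```latex
% received noise energy = spectral density level + 10*log10(bandwidth x integration time)
E_N = N_0 + 10\log_{10}(\Delta f\,\tau)
    = N_0 + 10\log_{10}(16.7\ \mathrm{kHz}\times 264\ \mu\mathrm{s})
    \approx N_0 + 6.4\ \mathrm{dB}
% so a threshold of E_e - E_N \approx 1 dB corresponds to
% E_e - N_0 \approx 7.4 dB under the older energy-to-spectral-density metric
```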

  13. Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli

    PubMed Central

    Stiles, Noelle R. B.; Shimojo, Shinsuke

    2015-01-01

    Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities. PMID:26490260

  14. Auditory evoked potentials in patients with major depressive disorder measured by Emotiv system.

    PubMed

    Wang, Dongcui; Mo, Fongming; Zhang, Yangde; Yang, Chao; Liu, Jun; Chen, Zhencheng; Zhao, Jinfeng

    2015-01-01

    In a previous study (unpublished), the Emotiv headset was validated for capturing event-related potentials (ERPs) from normal subjects. In the present follow-up study, the signal quality of the Emotiv headset was tested by the accuracy with which it discriminated Major Depressive Disorder (MDD) patients from normal subjects. ERPs of 22 MDD patients and 15 normal subjects were elicited by an auditory oddball task, and the amplitudes of the N1, N2, and P3 ERP components were specifically analyzed. The ERP features were statistically investigated. It was found that the Emotiv headset is capable of discriminating the abnormal N1, N2, and P3 components in MDD patients. The Relief-F algorithm was applied to all features for feature selection. The selected features were then input to a linear discriminant analysis (LDA) classifier with leave-one-out cross-validation to characterize the ERP features of MDD. All 127 possible combinations of the 7 selected ERP features were classified using LDA. The best classification accuracy achieved was 89.66%. These results suggest that MDD patients are distinguishable from normal subjects by ERPs measured with the Emotiv headset.
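
    A minimal sketch of the classification step described above, written in Python with scikit-learn (not the authors' code): every non-empty subset of the 7 selected ERP features is scored with LDA under leave-one-out cross-validation, and the best-scoring subset is reported. The feature matrix and labels are placeholders, and Relief-F feature selection is assumed to have been applied beforehand.

```python
# Exhaustive LDA + leave-one-out evaluation over all 2^7 - 1 = 127 feature subsets.
from itertools import combinations

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 7))           # placeholder: 37 subjects x 7 ERP features
y = np.array([1] * 22 + [0] * 15)      # 22 MDD patients, 15 normal subjects

best_acc, best_subset = 0.0, None
for k in range(1, 8):
    for subset in combinations(range(7), k):
        acc = cross_val_score(LinearDiscriminantAnalysis(),
                              X[:, list(subset)], y, cv=LeaveOneOut()).mean()
        if acc > best_acc:
            best_acc, best_subset = acc, subset

print(f"best feature subset {best_subset}: LOO accuracy {best_acc:.2%}")
```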

  15. Music as Environment: An Ecological and Biosemiotic Approach

    PubMed Central

    Reybrouck, Mark

    2014-01-01

    This paper provides an attempt to conceive of music in terms of a sounding environment. Starting from a definition of music as a collection of vibrational events, it introduces the distinction between discrete-symbolic representations as against analog-continuous representations of the sounds. The former makes it possible to conceive of music in terms of a Humboldt system, the latter in terms of an experiential approach. Both approaches, further, are not opposed to each other, but are complementary to some extent. There is, however, a distinction to be drawn between the bottom-up approach to auditory processing of environmental sounds and music, which is continuous and proceeding in real time, as against the top-down approach, which is proceeding at a level of mental representation by applying discrete symbolic labels to vibrational events. The distinction is discussed against the background of phylogenetic and ontogenetic claims, with a major focus on the innate auditory capabilities of the fetus and neonate and the gradual evolution from mere sensory perception of sound to sense-making and musical meaning. The latter, finally, is elaborated on the basis of the operational concepts of affordance and functional tone, thus bringing together some older contributions from ecology and biosemiotics. PMID:25545707

  16. Hair cell ribbon synapses

    PubMed Central

    Brandt, Andreas; Lysakowski, Anna

    2010-01-01

    Hearing and balance rely on the faithful synaptic coding of mechanical input by the auditory and vestibular hair cells of the inner ear. Mechanical deflection of their stereocilia causes the opening of mechanosensitive channels, resulting in hair cell depolarization, which controls the release of glutamate at ribbon-type synapses. Hair cells have a compact shape with strong polarity. Mechanoelectrical transduction and active membrane turnover associated with stereociliar renewal dominate the apical compartment. Transmitter release occurs at several active zones along the basolateral membrane. The astonishing capability of the hair cell ribbon synapse for temporally precise and reliable sensory coding has been the subject of intense investigation over the past few years. This research has been facilitated by the excellent experimental accessibility of the hair cell. For the same reason, the hair cell serves as an important model for studying presynaptic Ca2+ signaling and stimulus-secretion coupling. In addition to common principles, hair cell synapses differ in their anatomical and functional properties among species, among the auditory and vestibular organs, and among hair cell positions within the organ. Here, we briefly review synaptic morphology and connectivity and then focus on stimulus-secretion coupling at hair cell synapses. PMID:16944206

  17. An asymptotic membrane model for wrinkling of very thin films

    NASA Astrophysics Data System (ADS)

    Battista, Antonio; Hamdouni, Aziz; Millet, Olivier

    2018-05-01

    In this work, a formal deduction of a two-dimensional membrane theory, similar to the Landau-Lifshitz model, is performed via an asymptotic development of the weak formulation of the three-dimensional equations of elasticity. Some interesting aspects of the deduced model are investigated, in particular the property of obtaining a hyperbolic equation for the out-of-plane displacement under a certain class of boundary conditions and loads. Some simple cases are analyzed to show the relevant aspects of the model and the phenomenology that can be addressed. In particular, it is shown how this mathematical formulation is capable of describing the instabilities known as wrinkling, often observed in the buckling of very thin membranes.

  18. Derivation and application of a class of generalized boundary conditions

    NASA Technical Reports Server (NTRS)

    Senior, Thomas B. A.; Volakis, John L.

    1989-01-01

    Boundary conditions involving higher-order derivatives are presented for simulating surfaces whose reflection coefficients are known analytically, numerically, or experimentally. Procedures for determining the coefficients of the derivatives are discussed, along with the effect of displacing the surface where the boundary conditions are applied. Provided the coefficients satisfy a duality relation, equivalent forms of the boundary conditions involving tangential field components are deduced, and these provide the natural extension to nonplanar surfaces. As an illustration, the simulation of metal-backed uniform and three-layer dielectric coatings is given. It is shown that fourth-order conditions are capable of providing an accurate simulation for a uniform coating at least a quarter of a wavelength in thickness.
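
    Schematically, boundary conditions of this class constrain normal derivatives of the normal field components at the surface; a generic form consistent with the description above (the coefficients a_m and their dual counterparts b_m are the quantities fitted to the known reflection coefficient) is:

```latex
\sum_{m=0}^{M} a_m\,\frac{\partial^{m} E_n}{\partial n^{m}} = 0,
\qquad
\sum_{m=0}^{M} b_m\,\frac{\partial^{m} H_n}{\partial n^{m}} = 0
\quad\text{on the surface,}
```

    with the duality relation between the a_m and b_m permitting the equivalent tangential-field forms mentioned in the abstract; M = 4 corresponds to the fourth-order conditions cited for quarter-wavelength coatings.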

  19. Measuring Beam Sizes and Ultra-Small Electron Emittances Using an X-ray Pinhole Camera.

    PubMed

    Elleaume, P; Fortgang, C; Penel, C; Tarazona, E

    1995-09-01

    A very simple pinhole camera set-up has been built to diagnose the electron beam emittance of the ESRF. The pinhole is placed in the air next to an Al window. An image is obtained with a CCD camera imaging a fluorescent screen. The emittance is deduced from the size of the image. The relationship between the measured beam size and the electron beam emittance depends upon the lattice functions alpha, beta and eta, the screen resolution, pinhole size and photon beam divergence. The set-up is capable of measuring emittances as low as 5 pm rad and is presently routinely used as both an electron beam imaging device and an emittance diagnostic.
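
    A minimal sketch of the relation typically used to extract the emittance in such measurements (assuming a Gaussian beam, a pinhole magnification M, and resolution terms added in quadrature; this is a generic textbook form, not necessarily the exact ESRF analysis):

```latex
% RMS source size set by emittance, lattice functions and energy spread
\sigma_{\mathrm{beam}}^{2} = \varepsilon\,\beta + (\eta\,\sigma_{\delta})^{2}
% image size unfolded for magnification and system resolution
\sigma_{\mathrm{beam}}^{2} = \sigma_{\mathrm{image}}^{2}/M^{2} - \sigma_{\mathrm{res}}^{2}
\quad\Rightarrow\quad
\varepsilon = \frac{\sigma_{\mathrm{image}}^{2}/M^{2} - \sigma_{\mathrm{res}}^{2}
              - (\eta\,\sigma_{\delta})^{2}}{\beta}
```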

  20. Activation cross sections of the 64Ni(d,2n) reaction for the production of the medical radionuclide 64Cu

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Tárkányi, F.; Takács, S.; Kovalev, S. F.; Ignatyuk, A.

    2007-05-01

    Deuteron-induced reactions for the generation of 64Cu, used in diagnostic and therapeutic nuclear medicine, and of its possible short-lived contaminant 61Cu were investigated with the stacked-foil activation technique on natural nickel targets up to Ed = 20.5 MeV. Excitation functions for the reactions 64Ni(d,2n)64Cu and natNi(d,x)61Cu were obtained by gamma spectroscopy and are compared with earlier literature values. Thick-target yields have been deduced from the experimental cross sections, and the predictive capability of the ALICE-IPPE, EMPIRE II, and GNASH model codes is tested.
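
    The deduction of thick-target yields from measured excitation functions typically follows the standard slowing-down integral below (a generic sketch, not necessarily the authors' exact normalization): the cross section is integrated over the deuteron energy loss in the target.

```latex
% yield per incident deuteron for a target of mass number A and density rho
Y(E_d) = \frac{N_A\,\rho}{A}\int_{0}^{E_d}
         \frac{\sigma(E)}{\left|\,\mathrm{d}E/\mathrm{d}x\,\right|}\,\mathrm{d}E
```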

  1. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster using the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation with the experimental data over a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and of the mass-injection scheme were investigated and found to produce only trivial changes in the overall performance. An idealized model for these energy levels and propellants indicates that the energy expended on internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  2. Quench dynamics in strongly correlated Bose-Hubbard chains

    NASA Astrophysics Data System (ADS)

    Naegerl, Hanns-Christoph

    2013-05-01

    We present a series of experiments in the context of 1D physics with ultracold atoms, combining optical lattice potentials with the capability to tune the strength of the onsite particle interaction U. For an array of tilted 1D chains with site-to-site tilt E and initial unity occupation, we record the dynamics after a quench to the phase transition point U ~ E by monitoring the number of doublons created as a function of time after the quench. We observe characteristic oscillations from which we deduce a shift of the resonance condition as time progresses. For U/2 ~ E and U/3 ~ E we observe coupling to next-nearest neighbors and beyond.

  3. A microfluidic laser scattering sensor for label-free detection of waterborne pathogens

    NASA Astrophysics Data System (ADS)

    Wei, Huang; Yang, Limei; Li, Feng

    2016-10-01

    A microfluidic-based multi-angle laser scattering (MALS) sensor capable of acquiring the scattering pattern of a single particle is demonstrated. The size and relative refractive index (RI) of polystyrene (PS) microspheres were deduced with accuracies of 60 nm and 0.001, respectively, by analyzing the scattering patterns. We measured scattering patterns of waterborne parasites, i.e., Cryptosporidium parvum (C. parvum) and Giardia lamblia (G. lamblia), and of some other representative species in 1 L of water within 1 hour, and the waterborne parasites were identified with accuracy better than 96% by classification of their distinctive scattering patterns with a support-vector-machine (SVM) algorithm. The system provides a promising tool for label-free and rapid detection of waterborne parasites.
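
    A minimal sketch of the pattern-classification step in Python with scikit-learn (not the authors' pipeline): each particle is represented by its scattered intensity sampled at a fixed set of detection angles, and an RBF-kernel support-vector machine separates the species. All data below are placeholders.

```python
# SVM classification of multi-angle scattering patterns (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_angles = 32                              # intensity samples per scattering pattern
X = rng.normal(size=(600, n_angles))       # placeholder patterns
y = rng.integers(0, 3, size=600)           # 0 = C. parvum, 1 = G. lamblia, 2 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```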

  4. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
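
    As a simple illustration of the rate-matching idea (an illustrative calculation, not a figure from the article): if a listener's individual gamma resonance is f_ind, the corresponding auditory sampling period is its reciprocal, so presentation rates could in principle be tailored to it.

```latex
T_{\mathrm{opt}} = \frac{1}{f_{\mathrm{ind}}},
\qquad f_{\mathrm{ind}} = 40\ \mathrm{Hz}\ \Rightarrow\ T_{\mathrm{opt}} = 25\ \mathrm{ms}
```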

  5. Diazepam reduces excitability of amygdala and further influences auditory cortex following sodium salicylate treatment in rats.

    PubMed

    Song, Yu; Liu, Junxiu; Ma, Furong; Mao, Lanqun

    2016-12-01

    Diazepam can reduce the excitability of the lateral amygdala and eventually suppress the excitability of the auditory cortex in rats following salicylate treatment, indicating a regulating effect of the lateral amygdala on the auditory cortex in the tinnitus process. The aim was to study the spontaneous firing rates (SFR) of the auditory cortex and lateral amygdala as regulated by diazepam in a tinnitus rat model induced by sodium salicylate. This study first created a tinnitus rat model induced by sodium salicylate and recorded the SFR of both the auditory cortex and the lateral amygdala. Then diazepam was injected intraperitoneally and the SFR changes of the lateral amygdala were recorded. Finally, diazepam was microinjected into the lateral amygdala and the SFR changes of the auditory cortex were recorded. The SFRs of both the auditory cortex and the lateral amygdala increased after salicylate treatment. The SFR of the lateral amygdala decreased after intraperitoneal injection of diazepam. Microinjecting diazepam into the lateral amygdala decreased the SFR of the auditory cortex both ipsilaterally and contralaterally.

  6. Estrogenic modulation of auditory processing: a vertebrate comparison

    PubMed Central

    Caras, Melissa L.

    2013-01-01

    Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849

  7. Reduced auditory processing capacity during vocalization in children with Selective Mutism.

    PubMed

    Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair

    2007-02-01

    Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, the effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasking per se. The data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.

  8. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading, and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a unilateral subcortical lesion involving the acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  9. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    PubMed

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  10. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  11. Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne

    2016-12-01

    It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Tonic effects of the dopaminergic ventral midbrain on the auditory cortex of awake macaque monkeys.

    PubMed

    Huang, Ying; Mylius, Judith; Scheich, Henning; Brosch, Michael

    2016-03-01

    This study shows that ongoing electrical stimulation of the dopaminergic ventral midbrain can modify neuronal activity in the auditory cortex of awake primates for several seconds. This was reflected in a decrease of the spontaneous firing and in a bidirectional modification of the power of auditory evoked potentials. We consider that both effects are due to an increase in the dopamine tone in auditory cortex induced by the electrical stimulation. Thus, the dopaminergic ventral midbrain may contribute to the tonic activity in auditory cortex that has been proposed to be involved in associating events of auditory tasks (Brosch et al. Hear Res 271:66-73, 2011) and may modulate the signal-to-noise ratio of the responses to auditory stimuli.

  13. Tinnitus Intensity Dependent Gamma Oscillations of the Contralateral Auditory Cortex

    PubMed Central

    van der Loo, Elsa; Gais, Steffen; Congedo, Marco; Vanneste, Sven; Plazier, Mark; Menovsky, Tomas; Van de Heyning, Paul; De Ridder, Dirk

    2009-01-01

    Background: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. Methods and Findings: In unilateral tinnitus patients (N = 15; 10 right, 5 left) source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). Conclusion: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception. PMID:19816597

  14. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
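
    The analyze-quantize-invert structure described above can be illustrated with a toy stand-in. In the sketch below, an STFT substitutes for the invertible auditory model and a crude uniform quantizer stands in for perceptual quantization; it shows only the overall pipeline (forward transform, quantization of the representation, inversion back to the acoustic domain), not the coder proposed in the paper.

      import numpy as np
      from scipy.signal import stft, istft

      fs = 16000
      t = np.arange(0, 0.5, 1 / fs)
      x = 0.5 * np.sin(2 * np.pi * 440 * t)            # toy "audio" signal

      # Forward transform: stand-in for the invertible auditory representation.
      _, _, X = stft(x, fs=fs, nperseg=512)

      # Quantize the representation (uniform step; a perceptual coder would
      # instead allocate precision according to masking thresholds).
      step = 0.01
      Xq = step * np.round(X / step)

      # Inversion: map the quantized representation back to the acoustic domain.
      _, x_hat = istft(Xq, fs=fs, nperseg=512)

      err = x - x_hat[: x.size]
      snr_db = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))
      print(f"Reconstruction SNR after quantization: {snr_db:.1f} dB")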

  15. Noise-induced tinnitus: auditory evoked potential in symptomatic and asymptomatic patients.

    PubMed

    Santos-Filha, Valdete Alves Valentins dos; Samelli, Alessandra Giannella; Matas, Carla Gentile

    2014-07-01

    We evaluated the central auditory pathways in workers with noise-induced tinnitus and normal hearing thresholds, compared the auditory brainstem response results in groups with and without tinnitus, and correlated tinnitus location with the auditory brainstem response findings in individuals with a history of occupational noise exposure. Sixty individuals participated in the study and the following procedures were performed: anamnesis, immittance measures, pure-tone air-conduction thresholds at all frequencies between 0.25 and 8 kHz, and auditory brainstem response. The mean auditory brainstem response latencies were lower in the Control group than in the Tinnitus group, but no significant differences between the groups were observed. Qualitative analysis showed more alterations in the lower brainstem in the Tinnitus group. The strongest relationship between tinnitus location and auditory brainstem response alterations was detected in individuals with bilateral tinnitus and bilateral auditory brainstem response alterations compared with patients with unilateral alterations. Our findings suggest the occurrence of a possible dysfunction in the central auditory nervous system (brainstem) in individuals with noise-induced tinnitus and a normal hearing threshold.

  16. Auditory processing deficits in individuals with primary open-angle glaucoma.

    PubMed

    Rance, Gary; O'Hare, Fleur; O'Leary, Stephen; Starr, Arnold; Ly, Anna; Cheng, Belinda; Tomlin, Dani; Graydon, Kelley; Chisari, Donella; Trounce, Ian; Crowston, Jonathan

    2012-01-01

    The high energy demand of the auditory and visual pathways renders these sensory systems prone to diseases that impair mitochondrial function. Primary open-angle glaucoma, a neurodegenerative disease of the optic nerve, has recently been associated with a spectrum of mitochondrial abnormalities. This study sought to investigate auditory processing in individuals with open-angle glaucoma. Twenty-seven subjects with open-angle glaucoma underwent electrophysiologic (auditory brainstem response), auditory temporal processing (amplitude modulation detection), and speech perception (monosyllabic words in quiet and background noise) assessment in each ear. A cohort of age-, gender- and hearing-level-matched control subjects was also tested. While the majority of glaucoma subjects in this study demonstrated normal auditory function, a significant number (6/27 subjects, 22%) showed abnormal auditory brainstem responses and impaired auditory perception in one or both ears. The finding that a significant proportion of subjects with open-angle glaucoma presented with auditory dysfunction provides evidence of systemic neuronal susceptibility. Affected individuals may suffer significant communication difficulties in everyday listening situations.

  17. Magical ideation and hyperacusis.

    PubMed

    Dubal, Stéphanie; Viaud-Delmon, Isabelle

    2008-01-01

    The subjective experience conferred by auditory perception has rarely been addressed outside of studies of auditory hallucinations. The aim of this study is to describe the phenomenology of auditory experiences in individuals who endorse magical beliefs but do not report hallucinations. We examined the relationship between subjective auditory sensitivity and a 'psychotic-like' thinking style. Hyperacusis questionnaire scores were compared between 25 high-scoring participants on Chapman's magical ideation (MI) scale, 25 high-scoring participants on Chapman's physical anhedonia scale, and 25 control participants, pre-selected from a large student pool (n=1289). The participants who obtained high scores on the MI scale rated their auditory sensitivity higher than the two other groups. Our results indicate that, in healthy subjects, subjective auditory sensitivity is associated with MI without the mediation of anxiety commonly observed in pathological cases. We propose that hyperacusis associated with high MI scores may be a predisposing factor for deviant auditory experiences. The relative uncoupling of perception from auditory sensory input may result in a central hypersensitivity, which could play a role in triggering the experience of auditory hallucinations.

  18. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study. Results showed that males produce highly directional calls that, together with social status, influenced the responses of receivers. Results from the playback study confirmed that the isolated acoustic components of this display elicited similar responses among males. These three chapters provide further information about comparative aspects of spatial auditory processing in pinnipeds.

  19. Behavioral Indications of Auditory Processing Disorders.

    ERIC Educational Resources Information Center

    Hartman, Kerry McGoldrick

    1988-01-01

    Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)

  20. Human engineer's guide to auditory displays. Volume 1. Elements of perception and memory affecting auditory displays

    NASA Astrophysics Data System (ADS)

    Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.

    1984-08-01

    This work reviews the areas of auditory attention, recognition, memory and auditory perception of patterns, pitch, and loudness. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays.

  1. [Auditory processing and high frequency audiometry in students of São Paulo].

    PubMed

    Ramos, Cristina Silveira; Pereira, Liliane Desgualdo

    2005-01-01

    Auditory processing and auditory sensitivity to high-frequency sounds. To characterize the processes of localization, temporal ordering, hearing patterns, and the detection of high-frequency sounds, looking for possible relations between these factors. Thirty-two normally hearing fourth-grade students, born in the city of São Paulo, were submitted to a simplified evaluation of auditory processing, a duration pattern test, and high-frequency audiometry. Three (9.4%) individuals presented auditory processing disorder (APD), and in one of them there was a coexisting lowering of hearing thresholds on high-frequency audiometry. APD associated with a loss of auditory sensitivity at high frequencies should be further investigated.

  2. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
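
    The rendering step mentioned here (filtering a mono source through left- and right-ear HRTFs) reduces to a pair of convolutions. The sketch below uses fabricated head-related impulse responses that only mimic an interaural time and level difference; real generic or individualized HRIRs would instead be loaded from a measured dataset.

      import numpy as np
      from scipy.signal import fftconvolve

      fs = 44100
      t = np.arange(0, 0.3, 1 / fs)
      mono = np.sin(2 * np.pi * 500 * t)               # toy mono source

      # Fabricated HRIRs for a source off to the listener's right:
      # ~0.6 ms interaural delay and a 6 dB level difference mimic ITD/ILD.
      itd_samples = int(0.0006 * fs)
      hrir_right = np.zeros(256); hrir_right[0] = 1.0
      hrir_left = np.zeros(256);  hrir_left[itd_samples] = 10 ** (-6 / 20)

      # Binaural rendering: convolve the mono signal with each ear's response.
      left = fftconvolve(mono, hrir_left)[: mono.size]
      right = fftconvolve(mono, hrir_right)[: mono.size]
      binaural = np.stack([left, right], axis=1)       # (samples, 2) for playback
      print(binaural.shape)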

  3. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    PubMed

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Previous studies have shown brain reorganization after early deprivation of auditory sensory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes in grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, and that connectivity between the language and visual systems is also reduced. Notably, significantly increased connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of auditory input in prelingually deaf adolescents, especially between the auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.

  4. Information flow in the auditory cortical network

    PubMed Central

    Hackett, Troy A.

    2011-01-01

    Auditory processing in the cerebral cortex is comprised of an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421

  5. Survival, migration, and differentiation of Sox1-GFP embryonic stem cells in coculture with an auditory brainstem slice preparation.

    PubMed

    Glavaski-Joksimovic, Aleksandra; Thonabulsombat, Charoensri; Wendt, Malin; Eriksson, Mikael; Palmgren, Björn; Jonsson, Anna; Olivius, Petri

    2008-03-01

    The poor regeneration capability of the mammalian hearing organ has initiated different approaches to enhance its functionality after injury. To evaluate a potential neuronal repair paradigm in the inner ear and cochlear nerve we have previously used embryonic neuronal tissue and stem cells for implantation in vivo and in vitro. At present, we have used in vitro techniques to study the survival and differentiation of Sox1-green fluorescent protein (GFP) mouse embryonic stem (ES) cells as a monoculture or as a coculture with rat auditory brainstem slices. For the coculture, 300 μm-thick brainstem slices encompassing the cochlear nucleus and cochlear nerve were prepared from postnatal SD rats. The slices were propagated using the membrane interface method and the cochlear nuclei were prelabeled with DiI. After some days in culture, a suspension of Sox1 cells was deposited next to the brainstem slice. Following deposition, Sox1 cells migrated toward the brainstem and onto the cochlear nucleus. GFP was not detectable in undifferentiated ES cells but became evident during neural differentiation. Up to 2 weeks after transplantation the cocultures were fixed. The undifferentiated cells were evaluated with antibodies against progenitor cells whereas the differentiated cells were determined with neuronal and glial markers. The morphological and immunohistochemical data indicated that Sox1 cells in monoculture differentiated into a higher percentage of glial cells than neurons. However, when a coculture was used, a significantly lower percentage of Sox1 cells differentiated into glial cells. The results demonstrate that a coculture of Sox1 cells and auditory brainstem presents a useful model to study stem cell differentiation.

  6. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction: Mismatch negativity is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective: The present study aimed to assess pre-attentive auditory discrimination with fine and gross differences between auditory stimuli. Method: Seventeen normal-hearing individuals participated in the study, with informed consent. To assess pre-attentive auditory discrimination with a fine difference between stimuli, we recorded mismatch negativity (MMN) with a pair of pure tones, using 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, we used 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus to assess pre-attentive auditory discrimination with a gross difference between stimuli. We analyzed the MMN onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result: MMN was present in only 64% of the individuals in both conditions. Multivariate analysis of variance (MANOVA) showed no significant difference in any MMN measure (onset latency, offset latency, peak latency, peak amplitude, or area under the curve) between the two conditions. Conclusion: The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz vs. 1010 Hz) and gross (1000 Hz vs. 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
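
    The stimulus protocol described above (a frequent 1000 Hz standard with a rare 1010 Hz or 1100 Hz deviant) is straightforward to script. The sketch below generates one such oddball tone sequence; the tone duration, inter-stimulus interval, and 10% deviant probability are illustrative assumptions rather than the study's exact parameters.

      import numpy as np

      fs = 44100
      tone_dur = 0.1                 # s, assumed tone duration
      isi = 0.5                      # s, assumed inter-stimulus interval
      n_trials = 200
      p_deviant = 0.1                # assumed deviant probability

      def tone(freq_hz):
          t = np.arange(0, tone_dur, 1 / fs)
          return np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)  # ramped tone

      rng = np.random.default_rng(1)
      standard, deviant = 1000.0, 1010.0   # use 1100.0 for the "gross" condition
      silence = np.zeros(int(isi * fs))

      trials = []
      for _ in range(n_trials):
          freq = deviant if rng.random() < p_deviant else standard
          trials.append(np.concatenate([tone(freq), silence]))
      stimulus = np.concatenate(trials)
      print(f"{stimulus.size / fs:.1f} s of oddball stimulation generated")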

  7. A Brain System for Auditory Working Memory.

    PubMed

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  8. Estimating subglottal pressure via airflow interruption with auditory masking.

    PubMed

    Hoffman, Matthew R; Jiang, Jack J

    2009-11-01

    Current noninvasive measurement of subglottal pressure using airflow interruption often produces inconsistent results due to the elicitation of audio-laryngeal reflexes. Auditory feedback could be considered as a means of ensuring measurement accuracy and precision. The purpose of this study was to determine if auditory masking could be used with the airflow interruption system to improve intrasubject consistency. A prerecorded sample of subject phonation was played on a loop over headphones during the trials with auditory masking. This provided subjects with a target pitch and blocked out distracting ambient noise created by the airflow interrupter. Subglottal pressure was noninvasively measured using the airflow interruption system. Thirty subjects, divided into two equal groups, performed 10 trials without auditory masking and 10 trials with auditory masking. Group one performed the normal trials first, followed by the trials with auditory masking. Group two performed the auditory masking trials first, followed by the normal trials. Intrasubject consistency was improved by adding auditory masking, resulting in a decrease in average intrasubject standard deviation from 0.93 ± 0.51 to 0.47 ± 0.22 cm H2O (P < 0.001). Auditory masking can be used effectively to combat audio-laryngeal reflexes and aid subjects in maintaining constant glottal configuration and frequency, thereby increasing intrasubject consistency when measuring subglottal pressure. By considering auditory feedback, a more reliable method of measurement was developed. This method could be used by clinicians, as reliable, immediately available values of subglottal pressure are useful in evaluating laryngeal health and monitoring treatment progress.
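
    The consistency metric reported here (the average within-subject standard deviation of repeated subglottal-pressure estimates) is simple to compute. The sketch below assumes a subjects-by-trials array of pressure values for each condition; the numbers are placeholders, not the study's data.

      import numpy as np

      rng = np.random.default_rng(2)

      # Placeholder data: 30 subjects x 10 trials of estimated pressure (cm H2O).
      no_masking = rng.normal(loc=8.0, scale=0.9, size=(30, 10))
      with_masking = rng.normal(loc=8.0, scale=0.5, size=(30, 10))

      def mean_intrasubject_sd(trials):
          """Average of each subject's standard deviation across repeated trials."""
          return trials.std(axis=1, ddof=1).mean()

      print(f"Without masking: {mean_intrasubject_sd(no_masking):.2f} cm H2O")
      print(f"With masking:    {mean_intrasubject_sd(with_masking):.2f} cm H2O")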

  9. Involvement of the human midbrain and thalamus in auditory deviance detection.

    PubMed

    Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César

    2015-02-01

    Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning over multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  11. Presynaptic Neuronal Nicotinic Receptors Differentially Shape Select Inputs to Auditory Thalamus and Are Negatively Impacted by Aging.

    PubMed

    Sottile, Sarah Y; Hackett, Troy A; Cai, Rui; Ling, Lynne; Llano, Daniel A; Caspary, Donald M

    2017-11-22

    Acetylcholine (ACh) is a potent neuromodulator capable of modifying patterns of acoustic information flow. In auditory cortex, cholinergic systems have been shown to increase salience/gain while suppressing extraneous information. However, the mechanism by which cholinergic circuits shape signal processing in the auditory thalamus (medial geniculate body, MGB) is poorly understood. The present study, in male Fischer Brown Norway rats, seeks to determine the location and function of presynaptic neuronal nicotinic ACh receptors (nAChRs) at the major inputs to MGB and characterize how nAChRs change during aging. In vitro electrophysiological/optogenetic methods were used to examine responses of MGB neurons after activation of nAChRs during a paired-pulse paradigm. Presynaptic nAChR activation increased responses evoked by stimulation of excitatory corticothalamic and inhibitory tectothalamic terminals. Conversely, nAChR activation appeared to have little effect on evoked responses from inhibitory thalamic reticular nucleus and excitatory tectothalamic terminals. In situ hybridization data showed nAChR subunit transcripts in GABAergic inferior colliculus neurons and glutamatergic auditory cortical neurons supporting the present slice findings. Responses to nAChR activation at excitatory corticothalamic and inhibitory tectothalamic inputs were diminished by aging. These findings suggest that cholinergic input to the MGB increases the strength of tectothalamic inhibitory projections, potentially improving the signal-to-noise ratio and signal detection while increasing corticothalamic gain, which may facilitate top-down identification of stimulus identity. These mechanisms appear to be affected negatively by aging, potentially diminishing speech perception in noisy environments. Cholinergic inputs to the MGB appear to maximize sensory processing by adjusting both top-down and bottom-up mechanisms in conditions of attention and arousal. SIGNIFICANCE STATEMENT The pedunculopontine tegmental nucleus is the source of cholinergic innervation for sensory thalamus and is a critical part of an ascending arousal system that controls the firing mode of thalamic cells based on attentional demand. The present study describes the location and impact of aging on presynaptic neuronal nicotinic acetylcholine receptors (nAChRs) within the circuitry of the auditory thalamus (medial geniculate body, MGB). We show that nAChRs are located on ascending inhibitory and descending excitatory presynaptic inputs onto MGB neurons, likely increasing gain selectively and improving temporal clarity. In addition, we show that aging has a deleterious effect on nAChR efficacy. Cholinergic dysfunction at the level of MGB may affect speech understanding negatively in the elderly population. Copyright © 2017 the authors 0270-6474/17/3711378-13$15.00/0.

  12. Presynaptic Neuronal Nicotinic Receptors Differentially Shape Select Inputs to Auditory Thalamus and Are Negatively Impacted by Aging

    PubMed Central

    Sottile, Sarah Y.; Hackett, Troy A.

    2017-01-01

    Acetylcholine (ACh) is a potent neuromodulator capable of modifying patterns of acoustic information flow. In auditory cortex, cholinergic systems have been shown to increase salience/gain while suppressing extraneous information. However, the mechanism by which cholinergic circuits shape signal processing in the auditory thalamus (medial geniculate body, MGB) is poorly understood. The present study, in male Fischer Brown Norway rats, seeks to determine the location and function of presynaptic neuronal nicotinic ACh receptors (nAChRs) at the major inputs to MGB and characterize how nAChRs change during aging. In vitro electrophysiological/optogenetic methods were used to examine responses of MGB neurons after activation of nAChRs during a paired-pulse paradigm. Presynaptic nAChR activation increased responses evoked by stimulation of excitatory corticothalamic and inhibitory tectothalamic terminals. Conversely, nAChR activation appeared to have little effect on evoked responses from inhibitory thalamic reticular nucleus and excitatory tectothalamic terminals. In situ hybridization data showed nAChR subunit transcripts in GABAergic inferior colliculus neurons and glutamatergic auditory cortical neurons supporting the present slice findings. Responses to nAChR activation at excitatory corticothalamic and inhibitory tectothalamic inputs were diminished by aging. These findings suggest that cholinergic input to the MGB increases the strength of tectothalamic inhibitory projections, potentially improving the signal-to-noise ratio and signal detection while increasing corticothalamic gain, which may facilitate top-down identification of stimulus identity. These mechanisms appear to be affected negatively by aging, potentially diminishing speech perception in noisy environments. Cholinergic inputs to the MGB appear to maximize sensory processing by adjusting both top-down and bottom-up mechanisms in conditions of attention and arousal. SIGNIFICANCE STATEMENT The pedunculopontine tegmental nucleus is the source of cholinergic innervation for sensory thalamus and is a critical part of an ascending arousal system that controls the firing mode of thalamic cells based on attentional demand. The present study describes the location and impact of aging on presynaptic neuronal nicotinic acetylcholine receptors (nAChRs) within the circuitry of the auditory thalamus (medial geniculate body, MGB). We show that nAChRs are located on ascending inhibitory and descending excitatory presynaptic inputs onto MGB neurons, likely increasing gain selectively and improving temporal clarity. In addition, we show that aging has a deleterious effect on nAChR efficacy. Cholinergic dysfunction at the level of MGB may affect speech understanding negatively in the elderly population. PMID:29061702

  13. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perceptual deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out with literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the temporal processing deficit theory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The effects of early auditory-based intervention on adult bilateral cochlear implant outcomes.

    PubMed

    Lim, Stacey R

    2017-09-01

    The goal of this exploratory study was to determine the types of improvement that sequentially implanted auditory-verbal and auditory-oral adults with prelingual and childhood hearing loss received in bilateral listening conditions, compared to their best unilateral listening condition. Five auditory-verbal adults and five auditory-oral adults were recruited for this study. Participants were seated in the center of a 6-loudspeaker array. BKB-SIN sentences were presented from 0° azimuth, while multi-talker babble was presented from various loudspeakers. BKB-SIN scores in bilateral and the best unilateral listening conditions were compared to determine the amount of improvement gained. As a group, the participants had improved speech understanding scores in the bilateral listening condition. Although not statistically significant, the auditory-verbal group tended to have greater speech understanding with greater levels of competing background noise, compared to the auditory-oral participants. Bilateral cochlear implantation provides individuals with prelingual and childhood hearing loss with improved speech understanding in noise. A higher emphasis on auditory development during the critical language development years may add to increased speech understanding in adulthood. However, other demographic factors such as age or device characteristics must also be considered. Although both auditory-verbal and auditory-oral approaches emphasize spoken language development, they emphasize auditory development to different degrees. This may affect cochlear implant (CI) outcomes. Further consideration should be made in future auditory research to determine whether these differences contribute to performance outcomes. Additional investigation with a larger participant pool, controlled for effects of age and CI devices and processing strategies, would be necessary to determine whether language learning approaches are associated with different levels of speech understanding performance.

  15. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults

    PubMed Central

    Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081

  16. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

    Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke... (spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical... conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual)

  17. Early human communication helps in understanding language evolution.

    PubMed

    Lenti Boero, Daniela

    2014-12-01

    Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the "several building blocks of a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.

  18. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
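
    Gap-detection stimuli like those discussed above are noise bursts with a brief silent interval inserted. The sketch below builds one such gap-in-noise stimulus; the overall duration, gap position, and 5 ms gap length are illustrative assumptions.

      import numpy as np

      fs = 44100

      def gap_in_noise(total_dur=0.4, gap_dur=0.005, gap_onset=0.2, seed=0):
          """White-noise burst with a silent gap of gap_dur seconds at gap_onset."""
          rng = np.random.default_rng(seed)
          noise = rng.normal(0, 0.2, int(total_dur * fs))
          start = int(gap_onset * fs)
          noise[start:start + int(gap_dur * fs)] = 0.0   # insert the gap
          return noise

      stim = gap_in_noise(gap_dur=0.005)   # vary gap_dur to trace a detection threshold
      print(stim.shape)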

  19. Sensitivity and specificity of auditory steady‐state response testing

    PubMed Central

    Rabelo, Camila Maia; Schochat, Eliane

    2011-01-01

    INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
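
    Sensitivity and specificity, as used in this record, follow the usual definitions: the true-positive rate among subjects with a central lesion or dysfunction and the true-negative rate among normal controls. The sketch below computes both for a hypothetical decision rule that flags a subject when the ASSR-estimated threshold exceeds the behavioral threshold by more than a chosen cutoff; all numbers are invented for illustration.

      import numpy as np

      # Hypothetical per-subject data: ASSR-estimated minus behavioral threshold (dB)
      # and whether the subject truly has a central auditory lesion/dysfunction.
      threshold_gap_db = np.array([28, 15, 12, 25, 9, 22, 30, 11, 6, 27])
      has_disorder = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1], dtype=bool)

      cutoff_db = 20                       # assumed decision criterion
      flagged = threshold_gap_db > cutoff_db

      tp = np.sum(flagged & has_disorder)
      fn = np.sum(~flagged & has_disorder)
      tn = np.sum(~flagged & ~has_disorder)
      fp = np.sum(flagged & ~has_disorder)

      sensitivity = tp / (tp + fn)         # proportion of true disorders detected
      specificity = tn / (tn + fp)         # proportion of normals correctly passed
      print(f"Sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")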

  20. Use of auditory evoked potentials for intra-operative awareness in anesthesia: a consciousness-based conceptual model.

    PubMed

    Dong, Xuebao; Suo, Puxia; Yuan, Xin; Yao, Xuefeng

    2015-01-01

    Auditory evoked potentials (AEPs) have been used as a measure of the depth of anesthesia during the intra-operative process. AEPs are classically divided, on the basis of their latency, into first, fast, middle, slow, and late components. The use of auditory evoked potentials has been advocated for the assessment of intra-operative awareness (IOA), but has not been taken seriously enough to become universal. This is because we have not sufficiently explored the impact of auditory perception and auditory processing on IOA phenomena, or the subsequent psychological impact of IOA on the patient. More importantly, we have seldom looked at the phenomena of IOA from the perspective of consciousness itself. This perspective is especially important because many IOA phenomena exist more in the subconscious domain than in the conscious domain of explicit recall. Two important subconscious manifestations of IOA are implicit recall and post-operative dreams related to the operation. Here, we present an integrated, auditory consciousness-based model of IOA. We start with a brief description of auditory awareness and the factors affecting it. We then proceed to the evaluation of conscious and subconscious information processing in the auditory modality and how they interact during and after the intra-operative period. Further, we show that both conscious and subconscious auditory processing affect the IOA experience and both subsequently have serious psychological implications for the patient. These effects could be prevented by using auditory evoked potentials to monitor anesthesia, especially mid-latency auditory evoked responses (MLAERs). To conclude our model and present our hypothesis, we propose that the use of auditory evoked potentials should be universal with general anesthesia in order to prevent the distressing outcomes that result from both conscious and subconscious auditory processing during anesthesia.

  1. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested on the DLF and DLT tasks, in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though there was no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  2. Pre-Attentive Auditory Processing of Lexicality

    ERIC Educational Resources Information Center

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  3. A corollary discharge maintains auditory sensitivity during sound production

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2002-08-01

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  4. Neural coding strategies in auditory cortex.

    PubMed

    Wang, Xiaoqin

    2007-07-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

  5. Encoding of Discriminative Fear Memory by Input-Specific LTP in the Amygdala.

    PubMed

    Kim, Woong Bin; Cho, Jun-Hyeong

    2017-08-30

    In auditory fear conditioning, experimental subjects learn to associate an auditory conditioned stimulus (CS) with an aversive unconditioned stimulus. With sufficient training, animals fear conditioned to an auditory CS show a fear response to the CS, but not to irrelevant auditory stimuli. Although long-term potentiation (LTP) in the lateral amygdala (LA) plays an essential role in auditory fear conditioning, it is unknown whether LTP is induced selectively in the neural pathways conveying specific CS information to the LA in discriminative fear learning. Here, we show that postsynaptically expressed LTP is induced selectively in the CS-specific auditory pathways to the LA in a mouse model of auditory discriminative fear conditioning. Moreover, optogenetically induced depotentiation of the CS-specific auditory pathways to the LA suppressed conditioned fear responses to the CS. Our results suggest that input-specific LTP in the LA contributes to fear memory specificity, enabling adaptive fear responses only to the relevant sensory cue. VIDEO ABSTRACT. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. A Non-canonical Reticular-Limbic Central Auditory Pathway via Medial Septum Contributes to Fear Conditioning.

    PubMed

    Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I

    2018-01-17

    In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success of incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
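
    To make the pruning idea above concrete, the following is a minimal, hypothetical Python sketch of a perceptual objective function: a candidate signal is scored against a reference in a crude "auditory" domain (band energies on a roughly Bark-like scale), and a pruned variant skips bands whose reference energy falls far below the strongest band. The band edges, pruning threshold, and front end are illustrative placeholders, not the dissertation's actual auditory model.

    import numpy as np

    def band_energies(signal, sr, band_edges_hz):
        """Crude auditory front end: power spectrum grouped into frequency bands."""
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:])])

    def perceptual_distance(candidate, reference, sr, band_edges_hz, prune_db=None):
        """Distance between candidate and reference in the band-energy domain.

        If prune_db is given, bands whose reference energy lies more than
        prune_db below the strongest band are skipped (frequency pruning).
        """
        ref = band_energies(reference, sr, band_edges_hz)
        cand = band_energies(candidate, sr, band_edges_hz)
        keep = np.ones(len(ref), dtype=bool)
        if prune_db is not None:
            ref_db = 10.0 * np.log10(ref + 1e-12)
            keep = ref_db > (ref_db.max() - prune_db)   # drop perceptually weak bands
        return float(np.sum((np.log10(ref[keep] + 1e-12) -
                             np.log10(cand[keep] + 1e-12)) ** 2))

    if __name__ == "__main__":
        sr = 16000
        t = np.arange(sr) / sr                          # 1 s of audio
        reference = np.sin(2 * np.pi * 440.0 * t)
        candidate = reference + 0.01 * np.random.randn(sr)
        edges = np.geomspace(50.0, 8000.0, 25)          # ~24 Bark-like bands
        full = perceptual_distance(candidate, reference, sr, edges)
        pruned = perceptual_distance(candidate, reference, sr, edges, prune_db=40.0)
        print(f"full-model distance: {full:.4f}, pruned distance: {pruned:.4f}")

    In an iterative enhancement or component-selection loop, only the pruned distance would be re-evaluated for each candidate, which is where the computational savings described above would come from.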

  8. Comparing the effect of auditory-only and auditory-visual modes in two groups of Persian children using cochlear implants: a randomized clinical trial.

    PubMed

    Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam

    2013-09-01

    Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school when communicating with their parents and educators prior to and after implantation. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Auditory training improves auditory performance in cochlear implanted children.

    PubMed

    Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel

    2016-07-01

    While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity and possible strategies to minimize it is of utmost importance. Our aim here is to test the effects of an auditory training strategy, "Sound in Hands", which uses playful tasks grounded in the theoretical and empirical findings of cognitive science. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions for deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance on trained tasks and whether this learning would transfer to a phonetic discrimination test. Nineteen prelingually deaf children (4-10 years old) with unilateral cochlear implants and no additional handicap were recruited. The four main auditory cognitive processes (identification, discrimination, ASA and auditory memory) were stimulated and trained in the experimental group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min, and the untrained group served as the control group (CG). Measures were taken for both groups before training (T1) and after training (T2). The EG showed a significant improvement in the identification, discrimination and auditory memory tasks; the improvement in the ASA task did not reach significance. The CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only. Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits of CI children and for gaining a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. The role of auditory and kinaesthetic feedback mechanisms on phonatory stability in children.

    PubMed

    Rathna Kumar, S B; Azeem, Suhail; Choudhary, Abhishek Kumar; Prakash, S G R

    2013-12-01

    Auditory feedback plays an important role in phonatory control. When auditory feedback is disrupted, various changes are observed in vocal motor control. Vocal intensity and fundamental frequency (F0) levels tend to increase in response to auditory masking. Because of the close reflexive links between the auditory and phonatory systems, it is likely that phonatory stability may be disrupted when auditory feedback is disrupted or altered. However, studies of phonatory stability under auditory masking in adult subjects showed that most subjects maintained normal levels of phonatory stability. The authors of those earlier investigations suggested that auditory feedback is not the sole contributor to vocal motor control and phonatory stability; a complex neuromuscular reflex system known as kinaesthetic feedback may play a role in controlling phonatory stability when auditory feedback is disrupted or lacking. This raises the question of whether children show similar patterns of phonatory stability under auditory masking, since their neuromotor systems are still developing, less mature, and less resistant to altered auditory feedback than those of adults. A total of 40 normal hearing and speaking children (20 male and 20 female) between 6 and 8 years of age participated as subjects. The acoustic parameters shimmer, jitter and harmonic-to-noise ratio (HNR) were measured and compared between the no-masking condition (0 dB ML) and the masking condition (90 dB ML). Despite their neuromotor systems being less mature and less resistant than those of adults to altered auditory feedback, most of the children in the study demonstrated increased phonatory stability, reflected in reduced shimmer and jitter and increased HNR values. This study indicates that most children demonstrate well-established patterns of kinaesthetic feedback, which might have allowed them to maintain normal levels of vocal motor control even in the presence of disturbed auditory feedback. Hence, it can be concluded that children also use a kinaesthetic feedback mechanism to control phonatory stability when auditory feedback is disrupted, which in turn highlights the importance of including kinaesthetic feedback in therapeutic/intervention approaches for children with hearing and neurogenic speech deficits.

  11. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that the visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Strategy Choice Mediates the Link between Auditory Processing and Spelling

    PubMed Central

    Kwong, Tru E.; Brachman, Kyle J.

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities. PMID:25198787

  13. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and of the full sentence reveal that auditory perceptual simulation again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates than auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) further demonstrates that the speakers' speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Neural circuits in auditory and audiovisual memory.

    PubMed

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive contexts. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty of obtaining a robust animal model for studying auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in processing, integrating, and retaining communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  16. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy

    PubMed Central

    Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142

  17. Strategy choice mediates the link between auditory processing and spelling.

    PubMed

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  18. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    PubMed

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study reveals the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, providing a frame of reference for the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Evaluation of an imputed pitch velocity model of the auditory tau effect.

    PubMed

    Henry, Molly J; McAuley, J Devin; Zaleha, Marta

    2009-08-01

    This article extends an imputed pitch velocity model of the auditory kappa effect proposed by Henry and McAuley (2009a) to the auditory tau effect. Two experiments were conducted using an AXB design in which listeners judged the relative pitch of a middle target tone (X) in ascending and descending three-tone sequences. In Experiment 1, sequences were isochronous, establishing constant fast, medium, and slow velocity conditions. No systematic distortions in perceived target pitch were observed, and thresholds were similar across velocity conditions. Experiment 2 introduced to-be-ignored variations in target timing. Variations in target timing that deviated from constant velocity conditions introduced systematic distortions in perceived target pitch, indicative of a robust auditory tau effect. Consistent with an auditory motion hypothesis, the magnitude of the tau effect was larger at faster velocities. In addition, the tau effect was generally stronger for descending sequences than for ascending sequences. Combined with previous work on the auditory kappa effect, the imputed velocity model and associated auditory motion hypothesis provide a unified quantitative account of both auditory tau and kappa effects. In broader terms, these findings add support to the view that pitch and time relations in auditory patterns are fundamentally interdependent.

  20. Inservice Training Packet: Auditory Discrimination Listening Skills.

    ERIC Educational Resources Information Center

    Florida Learning Resources System/CROWN, Jacksonville.

    Intended to be used as the basis for a brief inservice workshop, the auditory discrimination/listening skills packet provides information on ideas, materials, and resources for remediating auditory discrimination and listening skill deficits. Included are a sample prescription form, tests of auditory discrimination, and a list of auditory…

  1. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    PubMed

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when the auditory speech discrimination differences relevant for later writing abilities start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development: at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems that develop during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    PubMed

    Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin

    2015-01-01

    Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brainstem responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3-day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the auditory brainstem responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brainstem responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although the differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  3. Modality-dependent effect of motion information in sensory-motor synchronised tapping.

    PubMed

    Ono, Kentaro

    2018-05-14

    Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive in coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remained unknown. This study compared tapping accuracy with a stationary or moving stimulus in both the auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings are likely the result of the higher temporal resolution of the auditory domain, which probably reflects physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Rapid Effects of Hearing Song on Catecholaminergic Activity in the Songbird Auditory Pathway

    PubMed Central

    Matragrano, Lisa L.; Beaulieu, Michaël; Phillip, Jessica O.; Rae, Ali I.; Sanford, Sara E.; Sockman, Keith W.; Maney, Donna L.

    2012-01-01

    Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses. PMID:22724011

  5. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system for incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Congenital deafness affects deep layers in primary and secondary auditory cortex

    PubMed Central

    Berger, Christoph; Kühne, Daniela; Scheper, Verena

    2017-01-01

    Abstract Congenital deafness leads to functional deficits in the auditory cortex for which early cochlear implantation can effectively compensate. Most of these deficits have been demonstrated functionally. Furthermore, the majority of previous studies on deafness have involved the primary auditory cortex; knowledge of higher‐order areas is limited to effects of cross‐modal reorganization. In this study, we compared the cortical cytoarchitecture of four cortical areas in adult hearing and congenitally deaf cats (CDCs): the primary auditory field A1, two secondary auditory fields, namely the dorsal zone and second auditory field (A2); and a reference visual association field (area 7) in the same section stained either using Nissl or SMI‐32 antibodies. The general cytoarchitectonic pattern and the area‐specific characteristics in the auditory cortex remained unchanged in animals with congenital deafness. Whereas area 7 did not differ between the groups investigated, all auditory fields were slightly thinner in CDCs, this being caused by reduced thickness of layers IV–VI. The study documents that, while the cytoarchitectonic patterns are in general independent of sensory experience, reduced layer thickness is observed in both primary and higher‐order auditory fields in layer IV and infragranular layers. The study demonstrates differences in effects of congenital deafness between supragranular and other cortical layers, but similar dystrophic effects in all investigated auditory fields. PMID:28643417

  7. [Auditory processing evaluation in children born preterm].

    PubMed

    Gallo, Júlia; Dias, Karin Ziliotto; Pereira, Liliane Desgualdo; Azevedo, Marisa Frasson de; Sousa, Elaine Colombo

    2011-01-01

    To verify the performance of children born preterm on auditory processing evaluation, to correlate the data with the behavioral hearing assessment carried out at 12 months of age, and to compare the results to those of the auditory processing evaluation of children born full-term. Participants were 30 children aged between 4 and 7 years, who were divided into two groups: Group 1 (children born preterm) and Group 2 (children born full-term). The auditory processing results of Group 1 were correlated with data obtained from the behavioral auditory evaluation carried out at 12 months of age, and the results were compared between groups. Subjects in Group 1 presented at least one risk indicator for hearing loss at birth. In the behavioral auditory assessment carried out at 12 months of age, 38% of the children in Group 1 were at risk for central auditory processing deficits, and 93.75% presented auditory processing deficits on the evaluation. Significant differences were found between the groups for the temporal order test, the PSI test with ipsilateral competitive message, and the speech-in-noise test. Delay in sound localization ability was associated with temporal processing deficits. Children born preterm have worse performance on auditory processing evaluation than children born full-term. Delay in sound localization at 12 months is associated with deficits in the physiological mechanism of temporal processing in the auditory processing evaluation carried out between 4 and 7 years of age.

  8. [Which colours can we hear?: light stimulation of the hearing system].

    PubMed

    Wenzel, G I; Lenarz, T; Schick, B

    2014-02-01

    The success of conventional hearing aids and electrical auditory prostheses for hearing-impaired patients is still limited in noisy environments and for sounds more complex than speech (e.g., music). This is partially due to the difficulty of frequency-specific activation of the auditory system using these devices. Stimulation of the auditory system using light pulses represents an alternative to mechanical and electrical stimulation. Light is a source of energy that can be very precisely focused and applied with little scattering, thus offering prospects for optimal activation of the auditory system. Studies investigating light stimulation at different levels of the auditory pathway have shown that stimulation of the auditory system is possible using light pulses. However, further studies and developments are needed before a new generation of light stimulation-based auditory prostheses can be made available for clinical application.

  9. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  10. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    PubMed

    Ptok, M; Meisen, R

    2008-01-01

    The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise due to sensory, mainly auditory, deficits. To further explore this theory we compared different measures of low-level auditory skills to writing skills in school children. Prospective study. School children attending third and fourth grade. Just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJm and TOJb); grades in writing, language and mathematics. Correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.

  11. Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations

    PubMed Central

    Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.

    2009-01-01

    Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102

  12. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  13. Long-range synchrony of gamma oscillations and auditory hallucination symptoms in schizophrenia

    PubMed Central

    Mulert, C.; Kirsch; Pascual-Marqui, Roberto; McCarley, Robert W.; Spencer, Kevin M.

    2010-01-01

    Phase locking in the gamma-band range has been shown to be diminished in patients with schizophrenia. Moreover, there have been reports of positive correlations between phase locking in the gamma-band range and positive symptoms, especially hallucinations. The aim of the present study was to use a new methodological approach to investigate gamma-band phase synchronization between the left and right auditory cortex in patients with schizophrenia and its relationship to auditory hallucinations. Subjects were 18 patients with chronic schizophrenia (SZ) and 16 healthy control (HC) subjects. Auditory hallucination symptom scores were obtained using the Scale for the Assessment of Positive Symptoms. Stimuli were 40-Hz binaural click trains. The generators of the 40-Hz auditory steady-state response (ASSR) were localized using eLORETA, and, based on the computed intracranial signals, lagged interhemispheric phase locking between the primary and secondary auditory cortices was analyzed. Current source density of the 40-Hz ASSR was significantly diminished in SZ in comparison to HC in the right superior and middle temporal gyrus (p<0.05). Interhemispheric phase locking was reduced in SZ in comparison to HC for the primary auditory cortices (p<0.05) but not for the secondary auditory cortices. A significant positive correlation was found between auditory hallucination symptom scores and phase synchronization between the primary auditory cortices (p<0.05, corrected for multiple testing) but not for the secondary auditory cortices. These results suggest that long-range synchrony of gamma oscillations is disturbed in schizophrenia and that this deficit is related to clinical symptoms such as auditory hallucinations. PMID:20713096
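
    As a conceptual illustration of the kind of measure involved, the Python sketch below computes a standard phase-locking value (PLV) between two already-extracted source signals (for example, left and right primary auditory cortex) after band-pass filtering around 40 Hz. The study itself used eLORETA's lagged phase synchronization, which additionally discounts zero-lag (volume-conduction) contributions, so this is only a simplified stand-in; the signals and parameters here are made up.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv_40hz(x, y, fs, band=(35.0, 45.0)):
        """Phase-locking value between two signals in a narrow gamma band."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        # PLV = magnitude of the mean phase-difference vector (1 = perfect locking)
        return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

    if __name__ == "__main__":
        fs, dur = 500, 2.0
        t = np.arange(int(fs * dur)) / fs
        left = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)
        right = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * np.random.randn(t.size)
        print(f"PLV (left vs. right, 35-45 Hz): {plv_40hz(left, right, fs):.3f}")

    A lagged synchronization measure would be computed from the same filtered signals but with the instantaneous (zero-lag) component partialled out before averaging.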

  14. Middle Latency Auditory Evoked Potential (MLAEP) in Workers with and without Tinnitus who are Exposed to Occupational Noise.

    PubMed

    dos Santos Filha, Valdete Alves Valentins; Samelli, Alessandra Giannella; Matas, Carla Gentile

    2015-09-11

    Tinnitus is an important occupational health concern, but few studies have focused on the central auditory pathways of workers with a history of occupational noise exposure. Thus, we analyzed the central auditory pathways of workers with a history of occupational noise exposure who had normal hearing thresholds, and compared middle latency auditory evoked potentials in those with and without noise-induced tinnitus. Sixty individuals (30 with and 30 without tinnitus) underwent the following procedures: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies between 0.25 and 8 kHz, and middle latency auditory evoked potentials. Quantitative analysis of the latencies and amplitudes of the middle latency auditory evoked potential showed no significant differences between the groups with and without tinnitus. In the qualitative analysis, we found that both groups showed increased middle latency auditory evoked potential latencies. The study group had more alterations of the "both" type regarding the Na-Pa amplitude, while the control group had more "electrode effect" alterations, but these differences between the groups were not significant. Individuals with normal hearing, with or without tinnitus, who are exposed to occupational noise have altered middle latency auditory evoked potentials, suggesting impairment of the auditory pathways in cortical and subcortical regions. Although the differences did not reach significance, individuals with tinnitus seemed to have more abnormalities in components of the middle latency auditory evoked potential than individuals without tinnitus, suggesting alterations in the generation and transmission of neuroelectrical impulses along the auditory pathway.

  15. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    PubMed Central

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and non-lemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation -- comparable to repetitive tonal stimulation -- of the lemniscal system evokes three major types of changes in the physiological properties, such as the tuning to specific values of acoustic parameters of cortical and subcortical auditory neurons through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that is different from those evoked by the lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews the recent progress in the research of corticocortical and corticofugal modulations of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273

  16. Fit for the frontline? A focus group exploration of auditory tasks carried out by infantry and combat support personnel.

    PubMed

    Bevis, Zoe L; Semeraro, Hannah D; van Besouw, Rachel M; Rowan, Daniel; Lineton, Ben; Allsopp, Adrian J

    2014-01-01

    In order to preserve their operational effectiveness and ultimately their survival, military personnel must be able to detect important acoustic signals and maintain situational awareness. The possession of sufficient hearing ability to perform job-specific auditory tasks is defined as auditory fitness for duty (AFFD). Pure tone audiometry (PTA) is used to assess AFFD in the UK military; however, it is unclear whether PTA is able to accurately predict performance on job-specific auditory tasks. The aim of the current study was to gather information about auditory tasks carried out by infantry personnel on the frontline and the environment these tasks are performed in. The study consisted of 16 focus group interviews with an average of five participants per group. Eighty British army personnel were recruited from five infantry regiments. The focus group guideline included seven open-ended questions designed to elicit information about the auditory tasks performed on operational duty. Content analysis of the data resulted in two main themes: (1) the auditory tasks personnel are expected to perform and (2) situations where personnel felt their hearing ability was reduced. Auditory tasks were divided into subthemes of sound detection, speech communication and sound localization. Reasons for reduced performance included background noise, hearing protection and attention difficulties. The current study provided an important and novel insight into the complex auditory environment experienced by British infantry personnel and identified 17 auditory tasks carried out by personnel on operational duties. These auditory tasks will be used to inform the development of a functional AFFD test for infantry personnel.

  17. Computational modeling of the human auditory periphery: Auditory-nerve responses, evoked potentials and hearing loss.

    PubMed

    Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav

    2018-03-01

    Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components in human auditory models because single-cell data stem from invasive animal recordings while human reference data only exist in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters in terms of biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
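
    The cochlear filtering stage that such functional models start from is often approximated with a gammatone filterbank. The sketch below (Python with NumPy; the sample rate, centre frequency and filter order are illustrative assumptions, not values from the study) generates the impulse response of a single gammatone channel and applies it to a click; it is a minimal illustration of basic functional auditory filtering, not the IHC-AN model described in the abstract.

      import numpy as np

      def erb(fc):
          # Glasberg & Moore equivalent rectangular bandwidth (Hz) of the
          # auditory filter centred at fc (Hz)
          return 24.7 * (4.37 * fc / 1000.0 + 1.0)

      def gammatone_ir(fc, fs=16000, duration=0.05, order=4, b=1.019):
          # Impulse response of a 4th-order gammatone filter at centre
          # frequency fc: t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t)
          t = np.arange(0.0, duration, 1.0 / fs)
          env = t ** (order - 1) * np.exp(-2.0 * np.pi * b * erb(fc) * t)
          g = env * np.cos(2.0 * np.pi * fc * t)
          return g / np.max(np.abs(g))   # peak-normalise

      # Filter a click through a hypothetical 1-kHz channel
      fs = 16000
      ir = gammatone_ir(1000.0, fs)
      click = np.zeros(fs // 10)
      click[0] = 1.0
      channel_output = np.convolve(click, ir)[:len(click)]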

  18. Detection and classification of underwater targets by echolocating dolphins

    NASA Astrophysics Data System (ADS)

    Au, Whitlow

    2003-10-01

    Many experiments have been performed with echolocating dolphins to determine their target detection and discrimination capabilities. Target detection experiments have been performed in a naturally noisy environment, with masking noise, with both phantom echoes and masking noise, and in reverberation. The ratio of echo energy to rms noise spectral density for the Atlantic bottlenose dolphin (Tursiops truncatus) at the 75% correct response threshold is approximately 7.5 dB, whereas for the beluga whale (Delphinapterus leucas) the threshold is approximately 1 dB. The dolphin's detection threshold in reverberation is approximately 2.5 dB vs 2 dB for the beluga. The difference in performance between species can probably be ascribed to differences in how the two species perceived the task. The bottlenose dolphin may be performing a combined detection/discrimination task, whereas the beluga may be performing a simple detection task. Echolocating dolphins also have the capability to make fine discriminations of target properties, such as wall-thickness differences of water-filled cylinders and material differences in metallic plates. The high-resolution property of the animals' echolocation signals and the high dynamic range of their auditory system are important factors in these outstanding discrimination capabilities.
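
    The detection figures above are expressed as the ratio of echo energy to noise spectral density, in dB. The short sketch below (Python with NumPy; the waveforms and sample rate are made-up placeholders) illustrates how that quantity can be computed in principle from an echo waveform and a background-noise recording, assuming roughly white noise.

      import numpy as np

      def echo_to_noise_db(echo, noise, fs):
          # 10*log10(E / N0): echo energy E (integral of the squared waveform)
          # over the noise spectral density N0 (noise power per Hz, estimated
          # under a white-noise assumption over the band 0..fs/2).
          echo_energy = np.sum(echo ** 2) / fs
          noise_power = np.mean(noise ** 2)
          n0 = noise_power / (fs / 2.0)
          return 10.0 * np.log10(echo_energy / n0)

      # Placeholder signals only, to exercise the function
      fs = 500_000
      t = np.arange(256) / fs
      echo = np.hanning(256) * np.sin(2 * np.pi * 120e3 * t)
      noise = np.random.randn(fs // 10) * 1e-3
      print(round(echo_to_noise_db(echo, noise, fs), 1), "dB")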

  19. Comorbidity of Auditory Processing, Language, and Reading Disorders

    ERIC Educational Resources Information Center

    Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.

    2009-01-01

    Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…

  20. Utilizing Oral-Motor Feedback in Auditory Conceptualization.

    ERIC Educational Resources Information Center

    Howard, Marilyn

    The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…

  1. Auditory Phoneme Discrimination in Illiterates: Mismatch Negativity--A Question of Literacy?

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Pannekamp, Ann; van der Meer, Elke

    2013-01-01

    These days, illiteracy is still a major problem. There is empirical evidence that auditory phoneme discrimination is one of the factors contributing to written language acquisition. The current study investigated auditory phoneme discrimination in participants who did not acquire written language sufficiently. Auditory phoneme discrimination was…

  2. Auditory Hypersensitivity in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Lucker, Jay R.

    2013-01-01

    A review of records was completed to determine whether children with auditory hypersensitivities have difficulty tolerating loud sounds due to auditory-system factors or some other factors not directly involving the auditory system. Records of 150 children identified as not meeting autism spectrum disorders (ASD) criteria and another 50 meeting…

  3. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    DTIC Science & Technology

    2016-11-28

    [Abstract excerpt] ...of low spontaneous rate auditory nerve fibers (ANFs) and reduction of auditory brainstem response wave-I amplitudes. The goal of this research is... auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram...

  4. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  5. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification.

    PubMed

    Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor

    2014-08-01

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
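
    The training stimuli were presented at a fixed 0 dB signal-to-noise ratio. A mixture at a prescribed SNR can be built by scaling the noise to the speech power; the sketch below (Python with NumPy; the signals and sample rate are placeholders, and this is not the authors' stimulus-generation code) shows the general idea.

      import numpy as np

      def mix_at_snr(speech, noise, snr_db):
          # Scale the noise so that 10*log10(P_speech / P_noise) == snr_db,
          # then add it to the speech (equal lengths and sample rates assumed).
          p_speech = np.mean(speech ** 2)
          p_noise = np.mean(noise ** 2)
          target_noise_power = p_speech / (10.0 ** (snr_db / 10.0))
          return speech + noise * np.sqrt(target_noise_power / p_noise)

      fs = 16000
      speech = np.random.randn(fs) * 0.1   # stand-in for a recorded word
      noise = np.random.randn(fs)
      mixture = mix_at_snr(speech, noise, snr_db=0.0)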

  6. Memory for sound, with an ear toward hearing in complex auditory scenes.

    PubMed

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  7. Mechanism of auditory hypersensitivity in human autism using autism model rats.

    PubMed

    Ida-Eto, Michiru; Hara, Nao; Ohkawara, Takeshi; Narita, Masaaki

    2017-04-01

    Auditory hypersensitivity is one of the major complications in autism spectrum disorder. The aim of this study was to investigate whether the auditory brain center is affected in autism model rats. Autism model rats were prepared by prenatal exposure to thalidomide on embryonic day 9 and 10 in pregnant rats. The superior olivary complex (SOC), a complex of auditory nuclei, was immunostained with anti-calbindin d28k antibody at postnatal day 50. In autism model rats, SOC immunoreactivity was markedly decreased. Strength of immunostaining of SOC auditory fibers was also weak in autism model rats. Surprisingly, the size of the medial nucleus of trapezoid body, a nucleus exerting inhibitory function in SOC, was significantly decreased in autism model rats. Auditory hypersensitivity may be, in part, due to impairment of inhibitory processing by the auditory brain center. © 2016 Japan Pediatric Society.

  8. Phonological Processing in Human Auditory Cortical Fields

    PubMed Central

    Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.

    2011-01-01

    We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252

  9. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.

  10. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    PubMed

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  11. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  12. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    PubMed

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. The SOFM produced better classification results than the DA methods. Subsequently, measurements from another 37 subjects, unknown to the trained SOFM, were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
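
    For readers unfamiliar with the discriminant-analysis side of such a pipeline, the sketch below (Python with scikit-learn; the data are random placeholders using the study's group sizes and 18 amplitude parameters, and the SOFM stage is omitted) shows how cross-validated linear discriminant classification of ERP amplitude vectors can be set up.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      # Placeholder data: one row of ERP amplitude parameters per child,
      # three groups (severe deficits, marked deficits, controls)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(46, 18))              # 46 children x 18 amplitudes
      y = np.repeat([0, 1, 2], [16, 16, 14])     # group labels

      lda = LinearDiscriminantAnalysis()
      scores = cross_val_score(lda, X, y, cv=5)  # 5-fold cross-validation
      print("mean accuracy:", scores.mean().round(2))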

  13. Dislocation of the incus into the external auditory canal after mountain-biking accident.

    PubMed

    Saito, T; Kono, Y; Fukuoka, Y; Yamamoto, H; Saito, H

    2001-01-01

    We report a rare case of incus dislocation to the external auditory canal after a mountain-biking accident. Otoscopy showed ossicular protrusion in the upper part of the left external auditory canal. CT indicated the disappearance of the incus, and an incus-like bone was found in the left external auditory canal. There was another bony and board-like structure in the attic. During the surgery, a square-shaped bony plate (1 x 1 cm) was found in the attic. It was determined that the bony plate had fallen from the tegmen of the attic. The fracture line in the posterosuperior auditory canal extending to the fossa incudis was identified. According to these findings, it was considered that the incus was pushed into the external auditory canal by the impact of skull injury through the fractured posterosuperior auditory canal, which opened widely enough for incus dislocation. Copyright 2001 S. Karger AG, Basel

  14. 21 CFR 874.3320 - Group hearing aid or group auditory trainer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Group hearing aid or group auditory trainer. 874... SERVICES (CONTINUED) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Prosthetic Devices § 874.3320 Group hearing aid or group auditory trainer. (a) Identification. A group hearing aid or group auditory trainer...

  15. 21 CFR 874.3320 - Group hearing aid or group auditory trainer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Group hearing aid or group auditory trainer. 874... SERVICES (CONTINUED) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Prosthetic Devices § 874.3320 Group hearing aid or group auditory trainer. (a) Identification. A group hearing aid or group auditory trainer...

  16. 21 CFR 874.3320 - Group hearing aid or group auditory trainer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Group hearing aid or group auditory trainer. 874... SERVICES (CONTINUED) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Prosthetic Devices § 874.3320 Group hearing aid or group auditory trainer. (a) Identification. A group hearing aid or group auditory trainer...

  17. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

    A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  18. Temporal Influence on Awareness

    DTIC Science & Technology

    1995-12-01

    [List-of-figures excerpt, Figs. 38-41] Test setup timing: measured vs expected modal delays (in ms). Experiment I: visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms). Experiment II: visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable).

  19. 21 CFR 874.3320 - Group hearing aid or group auditory trainer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Group hearing aid or group auditory trainer. 874... SERVICES (CONTINUED) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Prosthetic Devices § 874.3320 Group hearing aid or group auditory trainer. (a) Identification. A group hearing aid or group auditory trainer...

  20. Auditory Processing of Amplitude Envelope Rise Time in Adults Diagnosed with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Goswami, Usha

    2007-01-01

    Studies of basic (nonspeech) auditory processing in adults thought to have developmental dyslexia have yielded a variety of data. Yet there has been little consensus regarding the explanatory value of auditory processing in accounting for reading difficulties. Recently, however, a number of studies of basic auditory processing in children with…

  1. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  2. Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers

    ERIC Educational Resources Information Center

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…

  3. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    ERIC Educational Resources Information Center

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  4. 21 CFR 874.1090 - Auditory impedance tester.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Auditory impedance tester. 874.1090 Section 874...) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Diagnostic Devices § 874.1090 Auditory impedance tester. (a) Identification. An auditory impedance tester is a device that is intended to change the air pressure in the...

  5. 21 CFR 874.1090 - Auditory impedance tester.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Auditory impedance tester. 874.1090 Section 874...) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Diagnostic Devices § 874.1090 Auditory impedance tester. (a) Identification. An auditory impedance tester is a device that is intended to change the air pressure in the...

  6. Neural circuits in Auditory and Audiovisual Memory

    PubMed Central

    Plakke, B.; Romanski, L.M.

    2016-01-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty of obtaining a robust animal model in which to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integration, and retention of communication information. PMID:26656069

  7. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.

    PubMed

    Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H

    2013-07-01

    Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

  8. Auditory motion processing after early blindness

    PubMed Central

    Jiang, Fang; Stecker, G. Christopher; Fine, Ione

    2014-01-01

    Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
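
    A minimal sketch of the kind of fMRI pattern classification used here is given below (Python with scikit-learn; the voxel data, trial counts and labels are random placeholders, not the study's data): trial-wise response patterns from a region of interest such as hMT+ or PT are fed to a linear classifier, and decoding accuracy is estimated by cross-validation.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(80, 200))          # 80 trials x 200 voxel responses
      y = rng.integers(0, 2, size=80)         # perceived direction: 0=left, 1=right

      clf = SVC(kernel="linear")
      acc = cross_val_score(clf, X, y, cv=8)  # cross-validated decoding accuracy
      print("mean decoding accuracy:", acc.mean().round(2))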

  9. Reproduction of auditory and visual standards in monochannel cochlear implant users.

    PubMed

    Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna

    2004-01-01

    The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modalities. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users who displayed relatively good auditory comprehension did not differ from those of normally hearing subjects for either modality. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension at the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.

  10. The influence of (central) auditory processing disorder on the severity of speech-sound disorders in children.

    PubMed

    Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede

    2016-02-01

    To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 to 10 years and 11 months who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
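
    The severity index itself is a simple proportion: the Percentage of Consonants Correct-Revised is the share of consonant targets produced correctly, expressed as a percentage. The sketch below (Python; the counts and the cutoff value are illustrative assumptions, not the cutoff derived in the study) shows how such a score would be computed and compared against a cutoff.

      def pcc_r(correct_consonants, total_consonants):
          # Percentage of Consonants Correct-Revised: percentage of consonant
          # targets produced correctly (in the revised index, distortions are
          # scored as correct; omissions and substitutions are errors).
          return 100.0 * correct_consonants / total_consonants

      score = pcc_r(correct_consonants=85, total_consonants=110)   # about 77.3
      cutoff = 75.0                      # illustrative value only
      refer_for_capd_assessment = score < cutoff
      print(round(score, 1), refer_for_capd_assessment)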

  11. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  12. The effects of divided attention on auditory priming.

    PubMed

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  13. Enhanced attention-dependent activity in the auditory cortex of older musicians.

    PubMed

    Zendel, Benjamin Rich; Alain, Claude

    2014-01-01

    Musical training improves auditory processing abilities, which correlates with neuro-plastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates age-related decline in exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted subtitled movie of their choice. Both age-related and musical training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians, but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. Visual feedback within such a closed-loop system excludes those with visual problems, and hence there is a need for alternative sensory feedback pathways. In the context of replacing the visual channel with the auditory channel, this study aims to add to the limited evidence that visual feedback can be replaced by its auditory equivalent, and to assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time whether the type of auditory feedback method significantly influences motor imagery performance. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only, with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistically significant differences between the types of auditory feedback across the five sessions.
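
    The stereo condition in such feedback schemes typically relies on amplitude panning between two loudspeakers. The sketch below (Python with NumPy; the mapping from a decoder output to a pan position is an assumption for illustration, not the authors' implementation) shows constant-power panning, the two-speaker special case of the amplitude-panning idea that vector base amplitude panning generalizes to more loudspeakers.

      import numpy as np

      def constant_power_pan(position):
          # position in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right.
          # Returns (left_gain, right_gain) with gL**2 + gR**2 == 1, so the
          # overall level stays roughly constant as the source moves.
          theta = (position + 1.0) * np.pi / 4.0
          return np.cos(theta), np.sin(theta)

      # e.g. a motor-imagery decoder output of -0.4 (slightly "left")
      gl, gr = constant_power_pan(-0.4)
      print(round(gl, 3), round(gr, 3))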

  15. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability

    PubMed Central

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-01-01

    An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regard to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent of local excitability states in the context of a NT paradigm. PMID:26408799
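
    To make the graph-theoretical vocabulary concrete, the sketch below (Python with NumPy and NetworkX; the connectivity matrix and threshold are made-up placeholders) shows how a thresholded connectivity matrix can be turned into a graph from which simple hub and integration measures, such as degree centrality and global efficiency, are read off.

      import numpy as np
      import networkx as nx

      # Placeholder prestimulus connectivity between six regions
      conn = np.array([
          [0.0, 0.6, 0.1, 0.5, 0.2, 0.1],
          [0.6, 0.0, 0.2, 0.4, 0.1, 0.1],
          [0.1, 0.2, 0.0, 0.1, 0.3, 0.2],
          [0.5, 0.4, 0.1, 0.0, 0.2, 0.1],
          [0.2, 0.1, 0.3, 0.2, 0.0, 0.4],
          [0.1, 0.1, 0.2, 0.1, 0.4, 0.0],
      ])

      adj = (conn > 0.25).astype(int)       # threshold into a binary graph
      G = nx.from_numpy_array(adj)
      degree = nx.degree_centrality(G)      # hub-like nodes have high degree
      efficiency = nx.global_efficiency(G)  # one simple index of integration
      print(degree, round(efficiency, 2))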

  16. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing for combined auditory-visual stimuli. Experiment 2 compared the modality preferences of children using cochlear implants who were participating in an auditory-emphasis therapy approach with those of the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults, who showed a strong visual preference for unfamiliar stimuli only. The similar degree of auditory responding in children with hearing loss and children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) describe the pattern of modality preferences reported in young children without hearing loss; (2) recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) understand the role of familiarity in modality preferences in children with and without hearing loss.

  17. Differential Diagnosis of Speech Sound Disorder (Phonological Disorder): Audiological Assessment beyond the Pure-tone Audiogram.

    PubMed

    Iliadou, Vasiliki Vivian; Chermak, Gail D; Bamiou, Doris-Eva

    2015-04-01

    According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child's SSD. Central auditory processing disorder clinic pediatric case reports. Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech-language pathologists as a result of slower than expected progress in therapy. Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient's speech sound (phonological) disorder. Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD. American Academy of Audiology.

  18. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    PubMed

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims to decipher the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed a distracting auditory task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  19. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  20. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    PubMed

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  1. Interhemispheric transfer time in patients with auditory hallucinations: an auditory event-related potential study.

    PubMed

    Henshall, Katherine R; Sergejew, Alex A; McKay, Colette M; Rance, Gary; Shea, Tracey L; Hayden, Melissa J; Innes-Brown, Hamish; Copolov, David L

    2012-05-01

    Central auditory processing in schizophrenia patients with a history of auditory hallucinations has been reported to be impaired, and abnormalities of interhemispheric transfer have been implicated in these patients. This study examined interhemispheric functional connectivity between auditory cortical regions, using temporal information obtained from latency measures of the auditory N1 evoked potential. Interhemispheric Transfer Times (IHTTs) were compared across 3 subject groups: schizophrenia patients who had experienced auditory hallucinations, schizophrenia patients without a history of auditory hallucinations, and normal controls. Pure tones and single-syllable words were presented monaurally to each ear, while EEG was recorded continuously. IHTT was calculated for each stimulus type by comparing the latencies of the auditory N1 evoked potential recorded contralaterally and ipsilaterally to the ear of stimulation. The IHTTs for pure tones did not differ between groups. For word stimuli, the IHTT was significantly different across the 3 groups: the IHTT was close to zero in normal controls, was highest in the AH group, and was negative (shorter latencies ipsilaterally) in the nonAH group. Differences in IHTTs may be attributed to transcallosal dysfunction in the AH group, but altered or reversed cerebral lateralization in nonAH participants is also possible. Copyright © 2012 Elsevier B.V. All rights reserved.
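
    The transfer-time measure itself is a simple latency difference: the response recorded over the hemisphere ipsilateral to the stimulated ear is assumed to arrive via the corpus callosum, so its N1 latency minus the contralateral N1 latency estimates the interhemispheric transfer time. A minimal sketch (Python; the latencies are hypothetical values, not data from the study):

      def ihtt_ms(ipsilateral_latency_ms, contralateral_latency_ms):
          # Interhemispheric transfer time from auditory N1 latencies:
          # ipsilateral (callosally relayed) minus contralateral (direct).
          return ipsilateral_latency_ms - contralateral_latency_ms

      # Hypothetical N1 latencies for right-ear word stimuli
      print(ihtt_ms(ipsilateral_latency_ms=112.0,
                    contralateral_latency_ms=104.0))   # 8.0 ms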

  2. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  3. Cerebral responses to local and global auditory novelty under general anesthesia

    PubMed Central

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2017-01-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  4. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomical labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  5. FMRI investigation of cross-modal interactions in beat perception: Audition primes vision, but not vice versa

    PubMed Central

    Grahn, Jessica A.; Henry, Molly J.; McAuley, J. Devin

    2011-01-01

    How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a “beat” (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception. PMID:20858544

  6. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    PubMed

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

    Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  7. A novel hybrid auditory BCI paradigm combining ASSR and P300.

    PubMed

    Kaongoen, Netiwit; Jo, Sungho

    2017-03-01

    Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because patients with visual impairment cannot use vision-dependent BCIs, auditory stimuli have been used as a substitute for conventional visual stimuli. This paper introduces a hybrid auditory BCI that combines the auditory steady state response (ASSR) and spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can use both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the same binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can result in better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
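
    The quoted ITR figures are consistent with the widely used Wolpaw formula for information transfer rate. The Python sketch below shows that calculation; the ~2.6 s trial duration is a back-calculated assumption, since the abstract does not state the selection time.

      import math

      def wolpaw_itr(accuracy, n_classes, trial_seconds):
          """Information transfer rate in bits/min using Wolpaw's formula:
          bits/trial = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))."""
          p, n = accuracy, n_classes
          bits_per_trial = (math.log2(n)
                            + p * math.log2(p)
                            + (1 - p) * math.log2((1 - p) / (n - 1)))
          return bits_per_trial * 60.0 / trial_seconds

      # 85.33% binary accuracy with an assumed ~2.6 s per selection gives
      # roughly the reported 9.11 bits/min.
      print(round(wolpaw_itr(0.8533, 2, 2.6), 2))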

  8. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the frequency difference between the two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make two alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  9. Improved outcomes in auditory brainstem implantation with the use of near-field electrical compound action potentials.

    PubMed

    Mandalà, Marco; Colletti, Liliana; Colletti, Giacomo; Colletti, Vittorio

    2014-12-01

    To compare the outcomes (auditory threshold and open-set speech perception at 48-month follow-up) of a new near-field monitoring procedure, electrical compound action potential, on positioning the auditory brainstem implant electrode array on the surface of the cochlear nuclei versus the traditional far-field electrical auditory brainstem response. Retrospective study. Tertiary referral center. Among the 202 patients with auditory brainstem implants fitted and monitored with electrical auditory brainstem response during implant fitting, 9 also underwent electrical compound action potential recording. These subjects were matched retrospectively with a control group of 9 patients in whom only the electrical auditory brainstem response was recorded. Electrical compound action potentials were obtained using a cotton-wick recording electrode located near the surface of the cochlear nuclei and on several cranial nerves. Significantly lower potential thresholds were observed with the recording electrode located on the cochlear nuclei surface compared with the electrical auditory brainstem response (104.4 ± 32.5 vs 158.9 ± 24.2, P = .0030). Electrical brainstem response and compound action potentials identified effects on the neighboring cranial nerves on 3.2 ± 2.4 and 7.8 ± 3.2 electrodes, respectively (P = .0034). Open-set speech perception outcomes at 48-month follow-up were significantly better in the near-field than in the far-field recording group (78.9% versus 56.7%; P = .0051). Electrical compound action potentials during auditory brainstem implantation significantly improved the definition of the potential threshold and the number of auditory and extra-auditory waves generated. This enabled better coupling between the electrode array and the cochlear nuclei, significantly improving overall open-set speech perception. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.

  10. The 5% difference: early sensory processing predicts sarcasm perception in schizophrenia and schizo-affective disorder.

    PubMed

    Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C

    2014-01-01

    Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.

  11. Auditory Alterations in Children Infected by Human Immunodeficiency Virus Verified Through Auditory Processing Test

    PubMed Central

    Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima

    2016-01-01

    Introduction The auditory system of HIV-positive children may have deficits at various levels, such as the high incidence of problems in the middle ear that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) in the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in auditory figure-ground, attention, and memory skills. Furthermore, there were more errors under background noise in both age groups, with most errors in the left ear in the group of 8-year-olds and similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV and comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213

  12. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most of the progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  13. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    PubMed

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and for understanding the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
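
    As a purely illustrative aside, the kind of "nonlinear, self-exciting system" invoked above can be sketched as a single rate unit with recurrent excitation and a sigmoid nonlinearity. The Python toy below is not the authors' model; all parameters and stimulus values are assumptions, chosen only to show how recurrent drive can keep activity elevated across an interruption.

      import numpy as np

      def simulate_self_exciting_unit(stimulus, dt=1e-3, tau=0.02, w_rec=0.8, gain=4.0):
          """Euler simulation of tau * dr/dt = -r + sigmoid(w_rec * r + stimulus).
          A positive recurrent weight (w_rec) lets activity outlast its input."""
          sigmoid = lambda x: 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))
          r = np.zeros(len(stimulus))
          for t in range(1, len(stimulus)):
              drive = w_rec * r[t - 1] + stimulus[t - 1]
              r[t] = r[t - 1] + (dt / tau) * (-r[t - 1] + sigmoid(drive))
          return r

      # A 'tone' interrupted either by silence or by strong broadband drive.
      time = np.arange(0.0, 0.6, 1e-3)
      tone_with_gap = np.where((time > 0.2) & (time < 0.4), 0.0, 0.6)
      tone_with_noise = np.where((time > 0.2) & (time < 0.4), 1.0, 0.6)
      r_gap = simulate_self_exciting_unit(tone_with_gap)
      r_noise = simulate_self_exciting_unit(tone_with_noise)
      print(round(float(r_gap[350]), 2), round(float(r_noise[350]), 2))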

  14. Mismatch negativity (MMN) reveals inefficient auditory ventral stream function in chronic auditory comprehension impairments.

    PubMed

    Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen

    2014-10-01

    Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.

  15. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as neurophysiological substrates for auditory prediction. Tinnitus has been modeled as an auditory object which may demonstrate incomplete processing during auditory scene analysis, resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.

  16. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
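
    As a rough illustration of the pattern described (deduce a relationship from reference inputs and outputs, then apply it to a new material), the Python sketch below fits a simple least-squares relationship; the property values and the choice of a linear model are hypothetical and are not taken from the patent.

      import numpy as np

      # Hypothetical reference data: each row is a known material, columns are
      # computed/measured input properties; y is the output property of interest.
      X_ref = np.array([[10.2, 1.1], [11.5, 0.4], [9.8, 2.3], [12.0, 0.0]])
      y_ref = np.array([310.0, 240.0, 420.0, 200.0])

      # Deduce a relationship (here a linear least-squares fit with an intercept).
      A = np.column_stack([X_ref, np.ones(len(X_ref))])
      coeffs, *_ = np.linalg.lstsq(A, y_ref, rcond=None)

      # Apply the deduced relationship to a new material of interest.
      x_new = np.array([10.7, 1.6])
      y_pred = float(np.append(x_new, 1.0) @ coeffs)
      print(round(y_pred, 1))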

  17. Auditory Processing Disorder in Children

    MedlinePlus

    Auditory processing disorder (APD) describes a condition ...

  18. Blueprint for antimicrobial hit discovery targeting metabolic networks.

    PubMed

    Shen, Y; Liu, J; Estiu, G; Isin, B; Ahn, Y-Y; Lee, D-S; Barabási, A-L; Kapatral, V; Wiest, O; Oltvai, Z N

    2010-01-19

    Advances in genome analysis, network biology, and computational chemistry have the potential to revolutionize drug discovery by combining system-level identification of drug targets with the atomistic modeling of small molecules capable of modulating their activity. To demonstrate the effectiveness of such a discovery pipeline, we deduced common antibiotic targets in Escherichia coli and Staphylococcus aureus by identifying shared tissue-specific or uniformly essential metabolic reactions in their metabolic networks. We then predicted through virtual screening dozens of potential inhibitors for several enzymes of these reactions and showed experimentally that a subset of these inhibited both enzyme activities in vitro and bacterial cell viability. This blueprint is applicable for any sequenced organism with high-quality metabolic reconstruction and suggests a general strategy for strain-specific antiinfective therapy.
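
    Conceptually, the target-selection step described above reduces to intersecting the sets of essential metabolic reactions derived from each organism's reconstruction. The Python sketch below illustrates that step with made-up reaction identifiers; the study's actual target list is not reproduced here.

      # Hypothetical essential-reaction sets from flux balance analysis of each
      # organism's metabolic reconstruction (identifiers are invented).
      essential_ecoli = {"ACCOAC", "FABD", "MURA", "DHFR", "GLNS"}
      essential_saureus = {"ACCOAC", "MURA", "DHFR", "PYRG"}

      # Reactions essential in both organisms are candidate shared targets; their
      # enzymes would then be passed to virtual screening for inhibitors.
      shared_targets = sorted(essential_ecoli & essential_saureus)
      print(shared_targets)  # ['ACCOAC', 'DHFR', 'MURA']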

  19. Absolute distance measurement by dual-comb interferometry with multi-channel digital lock-in phase detection

    NASA Astrophysics Data System (ADS)

    Yang, Ruitao; Pollinger, Florian; Meiners-Hagen, Karl; Krystek, Michael; Tan, Jiubin; Bosse, Harald

    2015-08-01

    We present a dual-comb-based heterodyne multi-wavelength absolute interferometer capable of long distance measurements. The phase information of the various comb modes is extracted in parallel by a multi-channel digital lock-in phase detection scheme. Several synthetic wavelengths of the same order are constructed and the corresponding phases are averaged to deduce the absolute lengths with significantly reduced uncertainty. Comparison experiments with an incremental HeNe reference interferometer show a combined relative measurement uncertainty of 5.3 × 10⁻⁷ at a measurement distance of 20 m. Combining the advantages of synthetic wavelength interferometry and dual-comb interferometry, our compact and simple approach provides sufficient precision for many industrial applications.
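
    For orientation, the basic synthetic-wavelength relations behind such measurements are sketched below in Python. The wavelengths, phase, and integer fringe order in the example are illustrative values, not the instrument's actual parameters, and the real processing chain involves averaging over several synthetic wavelengths as described above.

      import math

      def synthetic_wavelength_m(lambda1_nm, lambda2_nm):
          """Synthetic wavelength (m) formed by two optical wavelengths (nm)."""
          l1, l2 = lambda1_nm * 1e-9, lambda2_nm * 1e-9
          return l1 * l2 / abs(l1 - l2)

      def distance_m(phase_rad, integer_order, lam_synth_m):
          """Absolute distance from fringe order N and phase: L = (N + phi/2pi) * Lambda / 2."""
          return (integer_order + phase_rad / (2 * math.pi)) * lam_synth_m / 2

      # Two wavelengths 0.013 nm apart near 1560 nm give a ~0.19 m synthetic wavelength.
      lam_s = synthetic_wavelength_m(1560.000, 1560.013)
      print(round(lam_s, 3), round(distance_m(1.7, 133, lam_s), 3))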

  20. Highly sensitive detection of naphthalene in solvent vapor using a functionalized PBG refractive index sensor.

    PubMed

    Girschikofsky, Maiko; Rosenberger, Manuel; Belle, Stefan; Brutschy, Malte; Waldvogel, Siegfried R; Hellmann, Ralf

    2012-01-01

    We report an optical refractive index sensor system based on a planar Bragg grating which is functionalized by substituted γ-cyclodextrin to determine low concentrations of naphthalene in solvent vapor. The sensor system exhibits a quasi-instantaneous shift of the Bragg wavelength and is therefore capable of online detection. The overall shift of the Bragg wavelength reveals a linear relationship to the analyte concentration with a gradient of 12.5 ± 1.5 pm/ppm. Due to the spectral resolution and repeatability of the interrogation system, this corresponds to acquisition steps of 80 ppb. Taking into account the experimentally detected signal noise, a minimum detection limit of 0.48 ± 0.05 ppm is deduced.
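
    The reported numbers relate straightforwardly: concentration follows from the wavelength shift divided by the quoted sensitivity, and a detection limit follows from the wavelength noise divided by the same slope. In the Python sketch below, the ~6 pm noise figure and the LOD convention (noise/slope rather than 3x noise) are assumptions used only to show how such a figure can be deduced.

      SENSITIVITY_PM_PER_PPM = 12.5  # gradient reported in the abstract

      def concentration_ppm(shift_pm):
          """Naphthalene concentration inferred from a Bragg-wavelength shift."""
          return shift_pm / SENSITIVITY_PM_PER_PPM

      def detection_limit_ppm(noise_pm):
          """Smallest resolvable concentration for a given wavelength noise floor."""
          return noise_pm / SENSITIVITY_PM_PER_PPM

      # An assumed ~6 pm noise floor reproduces a detection limit near 0.48 ppm;
      # a 100 pm shift would correspond to 8 ppm of naphthalene.
      print(round(detection_limit_ppm(6.0), 2), round(concentration_ppm(100.0), 1))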

  1. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    ERIC Educational Resources Information Center

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  2. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    ERIC Educational Resources Information Center

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  3. Medial Auditory Thalamus Is Necessary for Acquisition and Retention of Eyeblink Conditioning to Cochlear Nucleus Stimulation

    ERIC Educational Resources Information Center

    Halverson, Hunter E.; Poremba, Amy; Freeman, John H.

    2015-01-01

    Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…

  4. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  5. Enhanced auditory temporal gap detection in listeners with musical training.

    PubMed

    Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn

    2014-08-01

    Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians are investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds compared to nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.

  6. Thalamic and cortical pathways supporting auditory processing

    PubMed Central

    Lee, Charles C.

    2012-01-01

    The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130

  7. NIDCD Glossary

    MedlinePlus

    Brainstem Implant - auditory prosthesis that bypasses the cochlea and auditory nerve. This type of implant helps individuals who cannot benefit from a cochlear implant because the auditory nerves are not working. ...

  8. Estradiol-dependent Modulation of Serotonergic Markers in Auditory Areas of a Seasonally Breeding Songbird

    PubMed Central

    Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.

    2011-01-01

    Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431

  9. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  10. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that the visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
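
    For readers unfamiliar with the oddball design used here (frequent standards, rare randomly placed deviants), a minimal Python sketch of such a sequence follows; the 10% deviant probability is an assumption, as the abstract does not state the exact proportion.

      import random

      def oddball_sequence(n_trials, deviant_prob=0.1, seed=0):
          """Stimulus labels for a basic MMN oddball block: mostly 'standard'
          (/va/) trials with occasional randomly interspersed 'deviant' (/ba/) trials."""
          rng = random.Random(seed)
          return ["deviant" if rng.random() < deviant_prob else "standard"
                  for _ in range(n_trials)]

      seq = oddball_sequence(500)
      print(seq.count("deviant"), "deviants out of", len(seq), "trials")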

  11. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    PubMed

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.

  12. Auditory Proprioceptive Integration: Effects of Real-Time Kinematic Auditory Feedback on Knee Proprioception

    PubMed Central

    Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.

    2018-01-01

    The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning of auditory-proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was then included. Here, we investigated the influence of a larger magnitude and directional change of step-wise transposition of the frequency. In a first step, the results confirmed the findings of experiment I. Moreover, significant effects on knee auditory-proprioception repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
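
    The "convergent" angle-to-frequency mapping described above can be pictured with a toy mapping like the Python sketch below; the base frequency, scaling, and direction of the mapping are assumptions for illustration only, not the parameters used in the study.

      def feedback_frequency_hz(knee_angle_deg, target_angle_deg,
                                base_hz=440.0, hz_per_deg=8.0):
          """Map angular distance from the target onto pitch so that the tone
          converges on base_hz as the knee approaches the target angle.
          All parameter values here are illustrative assumptions."""
          return base_hz + hz_per_deg * abs(knee_angle_deg - target_angle_deg)

      # Approaching the 40 degree target from 60 degrees: pitch falls toward 440 Hz.
      for angle in (60, 50, 45, 41, 40):
          print(angle, feedback_frequency_hz(angle, 40.0))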

  13. Leftward Lateralization of Auditory Cortex Underlies Holistic Sound Perception in Williams Syndrome

    PubMed Central

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-01-01

    Background Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Methodology/Principal Findings Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. Conclusions/Significance There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties. PMID:20808792

  14. Gender-specific effects of prenatal and adolescent exposure to tobacco smoke on auditory and visual attention.

    PubMed

    Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R

    2007-12-01

    Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.

  15. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Auditory processing deficits in bipolar disorder with and without a history of psychotic features.

    PubMed

    Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N

    2015-11-01

    Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Self-monitoring in the cerebral cortex: Neural responses to small pitch shifts in auditory feedback during speech production.

    PubMed

    Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs

    2018-06-21

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting that auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
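
    For scale, a 25-cent shift corresponds to a frequency ratio of 2^(25/1200), i.e. under 1.5%, consistent with participants not noticing the perturbation; a one-line check in Python:

      ratio = 2 ** (25 / 1200)  # frequency ratio of a 25-cent pitch shift
      print(round(ratio, 5), round((ratio - 1) * 100, 2))  # ~1.01455 and ~1.45 (%)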

  18. Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.

    PubMed

    Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J

    2016-06-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall our results indicate the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.

  19. Auditory short-term memory capacity correlates with gray matter density in the left posterior STS in cognitively normal and dyslexic adults.

    PubMed

    Richardson, Fiona M; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M; Leff, Alex P; Price, Cathy J

    2011-12-01

    A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a "spoonerisms" task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.

  20. An initial investigation into the validity of a computer-based auditory processing assessment (Feather Squadron).

    PubMed

    Barker, Matthew D; Purdy, Suzanne C

    2016-01-01

    This research investigates a novel tablet-computer-based method for identifying and measuring poor auditory processing in school-aged children. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1 and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate that the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to investigate further the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.

  1. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  2. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  3. Similar prevalence and magnitude of auditory-evoked and visually evoked activity in the frontal eye fields: implications for multisensory motor control.

    PubMed

    Caruso, Valeria C; Pages, Daniel S; Sommer, Marc A; Groh, Jennifer M

    2016-06-01

    Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals undergo tailoring to match roughly the strength of visual signals present in the FEF, facilitating accessing of a common motor output pathway. Copyright © 2016 the American Physiological Society.

  4. Deficits in auditory processing contribute to impairments in vocal affect recognition in autism spectrum disorders: A MEG study.

    PubMed

    Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David

    2015-11-01

    The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.

  5. Macrophage-Mediated Glial Cell Elimination in the Postnatal Mouse Cochlea

    PubMed Central

    Brown, LaShardai N.; Xing, Yazhi; Noble, Kenyaria V.; Barth, Jeremy L.; Panganiban, Clarisse H.; Smythe, Nancy M.; Bridges, Mary C.; Zhu, Juhong; Lang, Hainan

    2017-01-01

    Hearing relies on the transmission of auditory information from sensory hair cells (HCs) to the brain through the auditory nerve. This relay of information requires HCs to be innervated by spiral ganglion neurons (SGNs) in an exclusive manner and SGNs to be ensheathed by myelinating and non-myelinating glial cells. In the developing auditory nerve, mistargeted SGN axons are retracted or pruned and excessive cells are cleared in a process referred to as nerve refinement. Whether auditory glial cells are eliminated during auditory nerve refinement is unknown. Using early postnatal mice of either sex, we show that glial cell numbers decrease after the first postnatal week, corresponding temporally with nerve refinement in the developing auditory nerve. Additionally, expression of immune-related genes was upregulated and macrophage numbers increased in a manner coinciding with the reduction of glial cell numbers. Transient depletion of macrophages during early auditory nerve development, using transgenic CD11bDTR/EGFP mice, resulted in the appearance of excessive glial cells. Macrophage depletion caused abnormalities in myelin formation and transient edema of the stria vascularis. Macrophage-depleted mice also showed auditory function impairment that partially recovered in adulthood. These findings demonstrate that macrophages contribute to the regulation of glial cell number during postnatal development of the cochlea and that glial cells play a critical role in hearing onset and auditory nerve maturation. PMID:29375297

  6. Is auditory perceptual timing a core deficit of developmental coordination disorder?

    PubMed

    Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen

    2018-05-09

    Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity disorder, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at age 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing processing in DCD using various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially improve our understanding of the etiology of DCD, help identify early biomarkers of DCD, and inform evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.

  7. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.

  8. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.

    PubMed

    Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  9. Usage of drip drops as stimuli in an auditory P300 BCI paradigm.

    PubMed

    Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu

    2018-02-01

    Recently, many auditory BCIs have used beeps as auditory stimuli, but beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The drip-drop sound is a natural sound that makes listeners feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory-based BCI system to improve the user-friendliness of the system. This study explored whether drip drops could be used as stimuli in an auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracy, and ratings of likability and difficulty, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (p < 0.05, Wilcoxon signed-rank test for both measures). In addition, DP obtained higher likability ratings (p < 0.05, Wilcoxon signed-rank test), with no significant difference in difficulty. The results showed that drip-drop sounds are reliable acoustic stimuli for an auditory BCI system.
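
    The paradigm comparison above rests on paired, per-subject measures evaluated with Wilcoxon signed-rank tests. A minimal sketch of that kind of comparison is given below; the per-subject accuracy values are invented for illustration and are not the study's data.

```python
# Paired non-parametric comparison of two auditory BCI paradigms
# (hypothetical per-subject online accuracies, not the published values).
from scipy.stats import wilcoxon

dp_accuracy = [0.85, 0.78, 0.90, 0.72, 0.88, 0.81, 0.79, 0.84, 0.91, 0.76]  # drip-drop
bp_accuracy = [0.74, 0.70, 0.83, 0.69, 0.80, 0.77, 0.73, 0.79, 0.85, 0.71]  # beep

stat, p = wilcoxon(dp_accuracy, bp_accuracy)   # paired signed-rank test
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```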

  10. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.

    PubMed

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.

  11. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    PubMed Central

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  12. Impairment of Auditory-Motor Timing and Compensatory Reorganization after Ventral Premotor Cortex Stimulation

    PubMed Central

    Kornysheva, Katja; Schubotz, Ricarda I.

    2011-01-01

    Integrating auditory and motor information often requires precise timing as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  13. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  14. Are non-human primates capable of rhythmic entrainment? Evidence for the gradual audiomotor evolution hypothesis.

    PubMed

    Merchant, Hugo; Honing, Henkjan

    2013-01-01

    We propose a decomposition of the neurocognitive mechanisms that might underlie interval-based timing and rhythmic entrainment. Next to reviewing the concepts central to the definition of rhythmic entrainment, we discuss recent studies suggesting that rhythmic entrainment is specific to humans and a selected group of bird species and, surprisingly, is not obvious in non-human primates. On the basis of these studies we propose the gradual audiomotor evolution hypothesis, which suggests that humans fully share interval-based timing with other primates, but only partially share the ability of rhythmic entrainment (or beat-based timing). This hypothesis accommodates the fact that the performance of non-human primates (i.e., macaques) is comparable to that of humans in single-interval tasks (such as interval reproduction, categorization, and interception), but differs in multiple-interval tasks (such as rhythmic entrainment, synchronization, and continuation). Furthermore, it is in line with the observation that macaques can, apparently, synchronize in the visual domain, but show less sensitivity in the auditory domain. Finally, while macaques are sensitive to interval-based timing and rhythmic grouping, the absence of a strong coupling between the auditory and motor systems of non-human primates might be the reason why macaques cannot rhythmically entrain in the way humans do.

  15. Top-down (Prior Knowledge) and Bottom-up (Perceptual Modality) Influences on Spontaneous Interpersonal Synchronization.

    PubMed

    Gipson, Christina L; Gorman, Jamie C; Hessler, Eric E

    2016-04-01

    Coordination with others is such a fundamental part of human activity that it can happen unintentionally. This unintentional coordination can manifest as synchronization and is observed in physical and human systems alike. We investigated the role of top-down influences (prior knowledge of the perceptual modality a partner is using) and bottom-up factors (perceptual modality combination) on spontaneous interpersonal synchronization. We examine this phenomenon with respect to two different theoretical perspectives that differently emphasize top-down and bottom-up factors in interpersonal synchronization: joint-action/shared cognition theories and ecological-interactive theories. In an empirical study, twelve dyads performed a finger oscillation task while attending to each other's movements through either visual, auditory, or visual and auditory perceptual modalities. Half of the participants were given prior knowledge of their partner's perceptual capabilities for coordinating across these different perceptual modality combinations. We found that the effect of top-down influence depends on the perceptual modality combination between two individuals. When people used the same perceptual modalities, top-down influence resulted in less synchronization, and when people used different perceptual modalities, top-down influence resulted in more synchronization. Furthermore, persistence in the change in behavior as a result of having perceptual information about each other ('social memory') was stronger when this top-down influence was present.

  16. Gain control in the sonar of odontocetes.

    PubMed

    Ya Supin, Alexander; Nachtigall, Paul E

    2013-06-01

    The sonar of odontocetes processes echo-signals within a wide range of echo levels. The level of echoes varies widely by tens of decibels depending on the level of the emitted sonar pulse, the target strength, the distance to the target, and the sound absorption by the water media. The auditory system of odontocetes must be capable of effective perception, analysis, and discrimination of echo-signals within all this variability. The sonar of odontocetes has several mechanisms to compensate for the echo-level variation (gain control). To date, several mechanisms of the biosonar gain control have been revealed in odontocetes: (1) adjustment of emitted sonar pulse levels (the longer the distance to the target, the higher the level of the emitted pulse), (2) short-term variation of hearing sensitivity based on forward masking of the echo by the preceding self-heard emitted pulse and subsequent release from the masking, and (3) active long-term control of hearing sensitivity. Recent investigations with the use of the auditory evoked-potential technique have demonstrated that these mechanisms effectively minimize the variation of the response to the echo when either the emitted sonar pulse level, or the target distance, or both vary within a wide range. A short review of these data is presented herein.
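
    The echo-level variability described above can be summarized with the standard active-sonar equation, EL = SL - 2*TL + TS, where transmission loss (TL) combines spherical spreading and absorption. The sketch below uses assumed parameter values to show how a range-dependent increase in the emitted level (mechanism 1 above) narrows the spread of received echo levels; it is illustrative only and is not a model of any particular species.

```python
# Toy illustration of biosonar gain control with the textbook sonar equation:
# echo level (EL) = source level (SL) - 2 * transmission loss (TL) + target strength (TS).
# All parameter values below are assumptions chosen for illustration.
import numpy as np

def echo_level(source_level_db, range_m, target_strength_db, absorption_db_per_m=0.02):
    transmission_loss = 20 * np.log10(range_m) + absorption_db_per_m * range_m
    return source_level_db - 2 * transmission_loss + target_strength_db

for r in (2, 10, 50):                                         # target range in metres
    fixed = echo_level(210, r, -30)                           # constant emitted level
    boosted = echo_level(210 + 20 * np.log10(r / 2), r, -30)  # level raised with range
    print(f"range {r:3d} m: fixed SL -> EL {fixed:6.1f} dB, boosted SL -> EL {boosted:6.1f} dB")
```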

  17. Hearing in the sea otter (Enhydra lutris): auditory profiles for an amphibious marine carnivore.

    PubMed

    Ghoul, Asila; Reichmuth, Colleen

    2014-11-01

    In this study we examine the auditory capabilities of the sea otter (Enhydra lutris), an amphibious marine mammal that remains virtually unstudied with respect to its sensory biology. We trained an adult male sea otter to perform a psychophysical task in an acoustic chamber and at an underwater apparatus. Aerial and underwater audiograms were constructed from detection thresholds for narrowband signals measured in quiet conditions at frequencies from 0.125-40 kHz. Aerial hearing thresholds were also measured in the presence of octave-band masking noise centered at eight signal frequencies (0.25-22.6 kHz) so that critical ratios could be determined. The aerial audiogram of the sea otter resembled that of sea lions and showed a reduction in low-frequency sensitivity relative to terrestrial mustelids. Best sensitivity was -1 dB re 20 µPa at 8 kHz. Under water, hearing sensitivity was significantly reduced when compared to sea lions and other pinniped species, demonstrating that sea otter hearing is primarily adapted to receive airborne sounds. Critical ratios were more than 10 dB higher than those measured for pinnipeds, suggesting that sea otters are less efficient than other marine carnivores at extracting acoustic signals from background noise, especially at frequencies below 2 kHz.
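
    Critical ratios like those reported above are computed as the difference between the masked tone threshold and the spectrum level of the masking noise; under the classical equal-power assumption the ratio also yields a rough estimate of the effective masking bandwidth. The numbers in the sketch below are placeholders, not the study's measurements.

```python
# Critical ratio (CR) computation and the classical bandwidth estimate derived
# from it; the threshold and noise values below are illustrative placeholders.
def critical_ratio(masked_threshold_db, noise_spectrum_level_db):
    """CR (dB) = tone threshold in noise minus per-Hz noise spectrum level."""
    return masked_threshold_db - noise_spectrum_level_db

cr = critical_ratio(masked_threshold_db=88.0, noise_spectrum_level_db=60.0)
equivalent_bandwidth_hz = 10 ** (cr / 10)   # equal-power (Fletcher) assumption
print(f"CR = {cr:.1f} dB, implied masking bandwidth of roughly {equivalent_bandwidth_hz:.0f} Hz")
```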

  18. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability for native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  19. Sensing in a noisy world: lessons from auditory specialists, echolocating bats.

    PubMed

    Corcoran, Aaron J; Moss, Cynthia F

    2017-12-15

    All animals face the essential task of extracting biologically meaningful sensory information from the 'noisy' backdrop of their environments. Here, we examine mechanisms used by echolocating bats to localize objects, track small prey and communicate in complex and noisy acoustic environments. Bats actively control and coordinate both the emission and reception of sound stimuli through integrated sensory and motor mechanisms that have evolved together over tens of millions of years. We discuss how bats behave in different ecological scenarios, including detecting and discriminating target echoes from background objects, minimizing acoustic interference from competing conspecifics and overcoming insect noise. Bats tackle these problems by deploying a remarkable array of auditory behaviors, sometimes in combination with the use of other senses. Behavioral strategies such as ceasing sonar call production and active jamming of the signals of competitors provide further insight into the capabilities and limitations of echolocation. We relate these findings to the broader topic of how animals extract relevant sensory information in noisy environments. While bats have highly refined abilities for operating under noisy conditions, they face the same challenges encountered by many other species. We propose that the specialized sensory mechanisms identified in bats are likely to occur in analogous systems across the animal kingdom. © 2017. Published by The Company of Biologists Ltd.

  20. A magnetoencephalography study of multi-modal processing of pain anticipation in primary sensory cortices.

    PubMed

    Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G

    2015-09-24

    Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation, as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we evaluated with magnetoencephalography the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When presented with well-established cues that signaled pain, visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Musical Imagery Involves Wernicke's Area in Bilateral and Anti-Correlated Network Interactions in Musicians.

    PubMed

    Zhang, Yizhen; Chen, Gang; Wen, Haiguang; Lu, Kun-Han; Liu, Zhongming

    2017-12-06

    Musical imagery is the human experience of imagining music without actually hearing it. The neural basis of this mental ability is unclear, especially for musicians capable of engaging in accurate and vivid musical imagery. Here, we created a visualization of an 8-minute symphony as a silent movie and used it as a real-time cue for musicians to continuously imagine the music for repeated and synchronized sessions during functional magnetic resonance imaging (fMRI). The activations and networks evoked by musical imagery were compared with those elicited by the subjects directly listening to the same music. Musical imagery and musical perception resulted in overlapping activations at the anterolateral belt and Wernicke's area, where the responses were correlated with the auditory features of the music. Whereas Wernicke's area interacted within the intrinsic auditory network during musical perception, it was involved in much more complex networks during musical imagery, showing positive correlations with the dorsal attention network and the motor-control network and negative correlations with the default-mode network. Our results highlight the important role of Wernicke's area in forming vivid musical imagery through bilateral and anti-correlated network interactions, challenging the conventional view of segregated and lateralized processing of music versus language.

  2. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex.

    PubMed

    Henry, Molly J; Obleser, Jonas

    2013-01-01

    Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
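
    The spectral analysis described above reduces, per trial, to reading out the amplitude and phase of the FFT bins at the 3-Hz stimulation frequency and its 6-Hz harmonic. The sketch below simulates two trials with different response phase lags to show how those quantities are extracted; the sampling rate, trial length, and noise level are assumptions for illustration, not the study's recording parameters.

```python
# Rough sketch of single-trial spectral readout at the stimulation frequency
# (3 Hz) and its second harmonic (6 Hz). Data are simulated, parameters assumed.
import numpy as np

fs, dur = 250, 4.0                       # sampling rate (Hz), trial length (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

def trial(phase_lag):
    """Simulated steady-state 3-Hz response plus noise."""
    return np.sin(2 * np.pi * 3 * t - phase_lag) + 0.5 * rng.normal(size=t.size)

def amp_phase(x, freq):
    spec = np.fft.rfft(x) / len(x)
    idx = int(round(freq * dur))         # bin index for an integer number of cycles
    return 2 * np.abs(spec[idx]), np.angle(spec[idx])

short_lag_trial, long_lag_trial = trial(phase_lag=0.2), trial(phase_lag=1.0)
for name, x in (("short-lag trial", short_lag_trial), ("long-lag trial", long_lag_trial)):
    a3, p3 = amp_phase(x, 3.0)
    a6, _ = amp_phase(x, 6.0)
    print(f"{name}: 3 Hz amp {a3:.2f}, phase {p3:+.2f} rad, 6 Hz amp {a6:.2f}")
```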

  3. Dissociable Neural Response Signatures for Slow Amplitude and Frequency Modulation in Human Auditory Cortex

    PubMed Central

    Henry, Molly J.; Obleser, Jonas

    2013-01-01

    Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals. PMID:24205309

  4. Acetylcholinesterase Inhibition and Information Processing in the Auditory Cortex

    DTIC Science & Technology

    1986-04-30

    ...or for causing auditory hallucinations (2,23,31,32). Thus, compounds which alter cholinergic transmission, in particular anticholinesterases...the upper auditory system. Thus, attending to and understanding verbal messages in humans, irrespective of the particular voice which speaks them, may...

  5. The Effect of Visual and Auditory Enhancements on Excitability of the Primary Motor Cortex during Motor Imagery: A Pilot Study

    ERIC Educational Resources Information Center

    Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya

    2012-01-01

    The effect of visual and auditory enhancements of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digiti minimi muscle during MI with auditory, visual, and combined auditory and visual information, and no…

  6. Reduced auditory efferent activity in childhood selective mutism.

    PubMed

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear along with normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with selective mutism may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  7. Awake craniotomy for assisting placement of auditory brainstem implant in NF2 patients.

    PubMed

    Zhou, Qiangyi; Yang, Zhijun; Wang, Zhenmin; Wang, Bo; Wang, Xingchao; Zhao, Chi; Zhang, Shun; Wu, Tao; Li, Peng; Li, Shiwei; Zhao, Fu; Liu, Pinan

    2018-06-01

    Auditory brainstem implants (ABIs) may be the only opportunity for patients with neurofibromatosis type 2 (NF2) to regain some hearing sensation. However, only a very small number of individuals have achieved open-set speech understanding and high sentence scores. Suboptimal placement of the ABI electrode array over the cochlear nucleus may be one of the main factors behind poor auditory performance. In the current study, we present a method of awake craniotomy to assist with ABI placement. Awake surgery and hearing testing via the retrosigmoid approach were performed for vestibular schwannoma resection and auditory brainstem implantation in four patients with NF2. Auditory outcomes and complications were assessed postoperatively. Three of the 4 patients who underwent awake craniotomy during ABI surgery perceived reproducible auditory sensations intraoperatively. Satisfactory numbers of effective electrodes, threshold levels, and distinct pitches were achieved in the wake-up hearing test. In addition, relatively few electrodes produced non-auditory percepts. There was no serious complication attributable to the ABI or the awake craniotomy. Awake craniotomy during auditory brainstem implantation is safe and well tolerated in patients with NF2, and this method can potentially improve the localization accuracy of the cochlear nucleus during surgery.

  8. Auditory Memory Distortion for Spoken Prose

    PubMed Central

    Hutchison, Joanna L.; Hubbard, Timothy L.; Ferrandino, Blaise; Brigante, Ryan; Wright, Jamie M.; Rypma, Bart

    2013-01-01

    Observers often remember a scene as containing information that was not presented but that would have likely been located just beyond the observed boundaries of the scene. This effect is called boundary extension (BE; e.g., Intraub & Richardson, 1989). Previous studies have observed BE in memory for visual and haptic stimuli, and the present experiments examined whether BE occurred in memory for auditory stimuli (prose, music). Experiments 1 and 2 varied the amount of auditory content to be remembered. BE was not observed, but when auditory targets contained more content, boundary restriction (BR) occurred. Experiment 3 presented auditory stimuli with less content and BR also occurred. In Experiment 4, white noise was added to stimuli with less content to equalize the durations of auditory stimuli, and BR still occurred. Experiments 5 and 6 presented trained stories and popular music, and BR still occurred. This latter finding ruled out the hypothesis that the lack of BE in Experiments 1–4 reflected a lack of familiarity with the stimuli. Overall, memory for auditory content exhibited BR rather than BE, and this pattern was stronger if auditory stimuli contained more content. Implications for the understanding of general perceptual processing and directions for future research are discussed. PMID:22612172

  9. Perceptual Learning and Auditory Training in Cochlear Implant Recipients

    PubMed Central

    Fu, Qian-Jie; Galvin, John J.

    2007-01-01

    Learning electrically stimulated speech patterns can be a new and difficult experience for cochlear implant (CI) recipients. Recent studies have shown that most implant recipients at least partially adapt to these new patterns via passive, daily-listening experiences. Gradually introducing a speech processor parameter (eg, the degree of spectral mismatch) may provide for more complete and less stressful adaptation. Although the implant device restores hearing sensation and the continued use of the implant provides some degree of adaptation, active auditory rehabilitation may be necessary to maximize the benefit of implantation for CI recipients. Currently, there are scant resources for auditory rehabilitation for adult, postlingually deafened CI recipients. We recently developed a computer-assisted speech-training program to provide the means to conduct auditory rehabilitation at home. The training software targets important acoustic contrasts among speech stimuli, provides auditory and visual feedback, and incorporates progressive training techniques, thereby maintaining recipients’ interest during the auditory training exercises. Our recent studies demonstrate the effectiveness of targeted auditory training in improving CI recipients’ speech and music perception. Provided with an inexpensive and effective auditory training program, CI recipients may find the motivation and momentum to get the most from the implant device. PMID:17709574

  10. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
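
    One analysis commonly applied to such brainstem responses is a stimulus-to-response cross-correlation, whose peak lag gives an estimate of neural transmission delay. The sketch below runs that computation on simulated signals only; the sampling rate, delay, and noise level are assumptions for illustration, and real recordings would first be filtered and averaged across trials.

```python
# Minimal sketch of estimating neural lag by cross-correlating a stimulus with
# a simulated brainstem response; all signal parameters are assumed values.
import numpy as np

fs = 20000
t = np.arange(0, 0.17, 1 / fs)
stimulus = np.sin(2 * np.pi * 100 * t) * (t < 0.05)          # toy 100-Hz, 50-ms stimulus
lag_samples = int(0.008 * fs)                                # assume ~8 ms neural delay
response = np.zeros_like(stimulus)
response[lag_samples:] = 0.3 * stimulus[:-lag_samples]       # delayed, attenuated copy
response += 0.05 * np.random.default_rng(2).normal(size=response.size)

xcorr = np.correlate(response, stimulus, mode="full")
lags = np.arange(-len(stimulus) + 1, len(stimulus)) / fs
print(f"estimated lag: {lags[np.argmax(xcorr)] * 1000:.1f} ms")
```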

  11. Opposite brain laterality in analogous auditory and visual tests.

    PubMed

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  12. Central auditory processing disorder (CAPD) in children with specific language impairment (SLI). Central auditory tests.

    PubMed

    Dlouha, Olga; Novak, Alexej; Vokral, Jan

    2007-06-01

    The aim of this project is to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language, modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words, presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2, and 63% in test 3. Results of the control group: 92% in test 1, 93% in test 2, and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.

  13. Missing a trick: Auditory load modulates conscious awareness in audition.

    PubMed

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  15. Combined Auditory and Vibrotactile Feedback for Human-Machine-Interface Control.

    PubMed

    Thorp, Elias B; Larson, Eric; Stepp, Cara E

    2014-01-01

    The purpose of this study was to determine the effect of the addition of binary vibrotactile stimulation to continuous auditory feedback (vowel synthesis) for human-machine interface (HMI) control. Sixteen healthy participants controlled facial surface electromyography to achieve 2-D targets (vowels). Eight participants used only real-time auditory feedback to locate targets whereas the other eight participants were additionally alerted to having achieved targets with confirmatory vibrotactile stimulation at the index finger. All participants trained using their assigned feedback modality (auditory alone or combined auditory and vibrotactile) over three sessions on three days and completed a fourth session on the third day using novel targets to assess generalization. Analyses of variance performed on the 1) percentage of targets reached and 2) percentage of trial time at the target revealed a main effect for feedback modality: participants using combined auditory and vibrotactile feedback performed significantly better than those using auditory feedback alone. No effect was found for session or the interaction of feedback modality and session, indicating a successful generalization to novel targets but lack of improvement over training sessions. Future research is necessary to determine the cognitive cost associated with combined auditory and vibrotactile feedback during HMI control.

  16. Auditory conflict and congruence in frontotemporal dementia.

    PubMed

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  17. From Perception to Metacognition: Auditory and Olfactory Functions in Early Blind, Late Blind, and Sighted Individuals

    PubMed Central

    Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria

    2016-01-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884

  18. Auditory hallucinations and PTSD in ex-POWS.

    PubMed

    Crompton, Laura; Lahav, Yael; Solomon, Zahava

    2017-01-01

    Literature has suggested that auditory hallucinations might be prevalent in the general population and could be linked to the experience of trauma. This prospective study examines the prevalence of auditory hallucinations in trauma survivors and its association with posttraumatic stress disorder (PTSD) symptoms, over time. Former prisoners of war (ex-POWs) from the 1973 Yom Kippur War (n = 99) with and without PTSD and comparable veterans (n = 103) were assessed twice, in 1991 (T1) and 2003 (T2) in regard to auditory hallucinations and PTSD symptoms. Findings indicated that ex-POWs who suffered from PTSD reported higher levels of auditory hallucinations at T2 as well as increased hallucinations over time, compared to ex-POWs without PTSD and combatants who did not endure captivity. The relation between PTSD and auditory hallucinations was unidirectional, so that the PTSD overall score at T1 predicted an increase in auditory hallucinations between T1 and T2, but not vice versa. Assessing the role of PTSD clusters in predicting hallucinations revealed that intrusion symptoms had a unique contribution, compared to avoidance and hyperarousal symptoms. The findings suggest that auditory hallucinations might be a consequence of the posttraumatic reaction among veterans.

  19. Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults.

    PubMed

    Tichko, Parker; Skoe, Erika

    2018-04-27

    Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children.

  20. Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)

    PubMed Central

    Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.

    2015-01-01

    An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663

  1. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    PubMed Central

    Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.

    2015-01-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918

  2. Speech perception in individuals with auditory dys-synchrony.

    PubMed

    Kumar, U A; Jayaram, M

    2011-03-01

    This study aimed to evaluate the effect of lengthening the transition duration of selected speech segments upon the perception of those segments in individuals with auditory dys-synchrony. Thirty individuals with auditory dys-synchrony participated in the study, along with 30 age-matched normal hearing listeners. Eight consonant-vowel syllables were used as auditory stimuli. Two experiments were conducted. Experiment one measured the 'just noticeable difference' time: the smallest prolongation of the speech sound transition duration which was noticeable by the subject. In experiment two, speech sounds were modified by lengthening the transition duration by multiples of the just noticeable difference time, and subjects' speech identification scores for the modified speech sounds were assessed. Subjects with auditory dys-synchrony demonstrated poor processing of temporal auditory information. Lengthening of speech sound transition duration improved these subjects' perception of both the placement and voicing features of the speech syllables used. These results suggest that innovative speech processing strategies which enhance temporal cues may benefit individuals with auditory dys-synchrony.

  3. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.
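    As a rough illustration of the multi-voxel pattern analysis (MVPA) approach mentioned above, the sketch below runs a cross-validated linear classifier on simulated voxel patterns. The data, classifier choice, and parameters are assumptions for demonstration, not the study's actual pipeline.

    ```python
    # Hedged sketch of MVPA-style decoding: cross-validated classification of stimulus
    # category from voxel activity patterns. Data here are simulated.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 80, 200
    labels = rng.integers(0, 2, n_trials)            # two hypothetical call categories
    patterns = rng.normal(size=(n_trials, n_voxels))
    patterns[labels == 1, :20] += 0.5                # weak category signal in some voxels

    clf = LinearSVC(C=1.0, max_iter=10000)
    scores = cross_val_score(clf, patterns, labels, cv=5)   # 5-fold cross-validation
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
    ```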

  4. Electrophysiological evidence for a general auditory prediction deficit in adults who stutter

    PubMed Central

    Daliri, Ayoub; Max, Ludo

    2015-01-01

    We previously found that stuttering individuals do not show the typical auditory modulation observed during speech planning in nonstuttering individuals. In this follow-up study, we further elucidate this difference by investigating whether stuttering speakers’ atypical auditory modulation is observed only when sensory predictions are based on movement planning or also when predictable auditory input is not a consequence of one’s own actions. We recorded 10 stuttering and 10 nonstuttering adults’ auditory evoked potentials in response to random probe tones delivered while anticipating either speaking aloud or hearing one’s own speech played back and in a control condition without auditory input (besides probe tones). N1 amplitude of nonstuttering speakers was reduced prior to both speaking and hearing versus the control condition. Stuttering speakers, however, showed no N1 amplitude reduction in either the speaking or hearing condition as compared with control. Thus, findings suggest that stuttering speakers have general auditory prediction difficulties. PMID:26335995

  5. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    PubMed

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
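    The component measures reported above are mean amplitudes within fixed post-stimulus time windows (P1: 90-110 ms; N1: 150-170 ms; late positivity: 300-420 ms). A minimal sketch of that computation, assuming a hypothetical epochs array and sampling rate, is shown below.

    ```python
    # Hedged sketch: mean ERP amplitude in fixed time windows.
    # `epochs` is a hypothetical (n_trials, n_samples) array time-locked to stimulus onset.
    import numpy as np

    fs = 500.0                                   # sampling rate in Hz (assumption)
    n_trials, n_samples = 100, 500
    epochs = np.random.default_rng(2).normal(size=(n_trials, n_samples))  # placeholder data
    times_ms = np.arange(n_samples) / fs * 1000  # 0 ... 998 ms after stimulus onset

    def mean_amplitude(epochs, times_ms, t_start, t_end):
        """Average voltage across trials and across samples within [t_start, t_end] ms."""
        window = (times_ms >= t_start) & (times_ms <= t_end)
        return epochs[:, window].mean()

    for name, (t0, t1) in {"P1": (90, 110), "N1": (150, 170), "LP": (300, 420)}.items():
        print(name, mean_amplitude(epochs, times_ms, t0, t1))
    ```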

  7. Auditory temporal processing skills in musicians with dyslexia.

    PubMed

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  8. What You See Isn’t Always What You Get: Auditory Word Signals Trump Consciously Perceived Words in Lexical Access

    PubMed Central

    Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.

    2016-01-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021

  9. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
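    The abstract names two building blocks that can be written down compactly: auditory-filter equivalent rectangular bandwidths (here via the standard Glasberg and Moore approximation, which is an assumption about the exact formula used) and the segmental SNR objective measure. The sketch below illustrates both; it is not the GMMSE-AMT[ERB] algorithm itself.

    ```python
    # Hedged sketch of two components named above: ERB bandwidths and segmental SNR.
    import numpy as np

    def erb_bandwidth(fc_hz):
        """Approximate ERB (in Hz) of the auditory filter centred at fc_hz (Glasberg & Moore)."""
        return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

    def segmental_snr(clean, processed, fs, frame_ms=20, floor_db=(-10, 35)):
        """Frame-wise SNR between clean and processed speech, averaged over frames."""
        n = int(fs * frame_ms / 1000)
        snrs = []
        for start in range(0, min(len(clean), len(processed)) - n, n):
            c = clean[start:start + n]
            e = c - processed[start:start + n]
            snr = 10 * np.log10(np.sum(c ** 2) / (np.sum(e ** 2) + 1e-12) + 1e-12)
            snrs.append(np.clip(snr, *floor_db))   # limit extreme frames, common practice
        return float(np.mean(snrs))

    print(erb_bandwidth(1000.0))   # ~132.6 Hz at 1 kHz
    ```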

  10. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: A healthy-aging perspective

    PubMed Central

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2014-01-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals - over a range of time scales from milliseconds to seconds - renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. PMID:24956028

  11. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  12. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.
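    For the two preceding records, the reported proportions of variance explained translate directly into correlation magnitudes under a simple bivariate relation: R² values of 0.16 and 0.25 correspond to |r| of about 0.40 and 0.50, as the short computation below confirms.

    ```python
    # Variance explained (R^2) back to correlation magnitude, assuming a simple
    # bivariate relation between predictor and reading comprehension score.
    for r2 in (0.16, 0.25):
        print(f"R^2 = {r2:.2f}  ->  |r| = {r2 ** 0.5:.2f}")
    ```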

  13. Neuropsychopharmacology of auditory hallucinations: insights from pharmacological functional MRI and perspectives for future research.

    PubMed

    Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina

    2013-01-01

    Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood oxygen level-dependent effects of antipsychotic drugs reflect specific, regional effects, but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.

  14. Hearing in Insects.

    PubMed

    Göpfert, Martin C; Hennig, R Matthias

    2016-01-01

    Insect hearing has independently evolved multiple times in the context of intraspecific communication and predator detection by transforming proprioceptive organs into ears. Research over the past decade, ranging from the biophysics of sound reception to molecular aspects of auditory transduction to the neuronal mechanisms of auditory signal processing, has greatly advanced our understanding of how insects hear. Apart from evolutionary innovations that seem unique to insect hearing, parallels between insect and vertebrate auditory systems have been uncovered, and the auditory sensory cells of insects and vertebrates turned out to be evolutionarily related. This review summarizes our current understanding of insect hearing. It also discusses recent advances in insect auditory research, which have put forward insect auditory systems for studying biological aspects that extend beyond hearing, such as cilium function, neuronal signal computation, and sensory system evolution.

  15. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not the auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  16. Extremely high frequency sensitivity in a 'simple' ear.

    PubMed

    Moir, Hannah M; Jackson, Joseph C; Windmill, James F C

    2013-08-23

    An evolutionary war is being played out between the bat, which uses ultrasonic calls to locate insect prey, and the moth, which uses microscale ears to listen for the approaching bat. While the highest known frequency of bat echolocation calls is 212 kHz, the upper limit of moth hearing is considered much lower. Here, we show that the greater wax moth, Galleria mellonella, is capable of hearing ultrasonic frequencies approaching 300 kHz; the highest frequency sensitivity of any animal. With auditory frequency sensitivity that is unprecedented in the animal kingdom, the greater wax moth is ready and armed for any echolocation call adaptations made by the bat in the on-going bat-moth evolutionary war.

  17. Short term hearing loss in general aviation operations, phase 1, part 1

    NASA Technical Reports Server (NTRS)

    Parker, J. F., Jr.

    1972-01-01

    The effects of light aircraft noise on six subjects during flight operations were investigated. The noise environment in the Piper Apache light aircraft was found to be capable of producing hearing threshold shifts. The following are the principal findings and conclusions: (1) Through most of the frequency range for which measurements were taken (500 to 6000 Hz), there was a regular progression showing increased loss of auditory acuity as a function of increased exposure time. (2) Extensive variability was found in the results among subjects, and in the measured loss at discrete frequencies for each subject. (3) The principal loss of hearing occurred at the low frequencies, around 500 Hz.

  18. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  19. Alteration of glycine receptor immunoreactivity in the auditory brainstem of mice following three months of exposure to radiofrequency radiation at SAR 4.0 W/kg.

    PubMed

    Maskey, Dhiraj; Kim, Hyung Gun; Suh, Myung-Whan; Roh, Gu Seob; Kim, Myeung Ju

    2014-08-01

    The increasing use of mobile communication has triggered an interest in its possible effects on the regulation of neurotransmitter signals. Due to the close proximity of mobile phones to hearing-related brain regions during usage, their use may lead to a decrease in the ability to segregate sounds, leading to serious auditory dysfunction caused by prolonged exposure to radiofrequency (RF) radiation. The interplay among auditory processing, excitation and inhibitory molecule interactions plays a major role in auditory function. In particular, inhibitory molecules, such as glycine, are predominantly localized in the auditory brainstem. However, the effects of exposure to RF radiation on auditory function have not been reported to date. Thus, the aim of the present study was to investigate the effects of exposure to RF radiation on glycine receptor (GlyR) immunoreactivity (IR) in the auditory brainstem region at 835 MHz with a specific absorption rate of 4.0 W/kg for three months using free-floating immunohistochemistry. Compared with the sham control (SC) group, a significant loss of staining intensity of neuropils and cells in the different subdivisions of the auditory brainstem regions was observed in the mice exposed to RF radiation (E4 group). A decrease in the number of GlyR immunoreactive cells was also noted in the cochlear nuclear complex [anteroventral cochlear nucleus (AVCN), 31.09%; dorsal cochlear nucleus (DCN), 14.08%; posteroventral cochlear nucleus (PVCN), 32.79%] and the superior olivary complex (SOC) [lateral superior olivary nucleus (LSO), 36.85%; superior paraolivary nucleus (SPN), 24.33%; medial superior olivary nucleus (MSO), 23.23%; medial nucleus of the trapezoid body (MNTB), 10.15%] of the mice in the E4 group. Auditory brainstem response (ABR) analysis also revealed a significant threshold elevation in the exposed (E4) group, which may be associated with auditory dysfunction. The present study suggests that the auditory brainstem region is susceptible to chronic exposure to RF radiation, which may affect the function of the central auditory system.

  20. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component assumingly reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/postcomparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback. Furthermore, independent of the training group, a significant spatial pre-post difference was found in the event-related component P200 (P = .04).
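    The abstract does not define the "within-session learning parameter"; one plausible operationalisation, sketched below with entirely hypothetical data and variable names, is the slope of a linear fit to the trained ERP amplitude over trials within a session, correlated across participants with symptom change.

    ```python
    # Hedged sketch of one plausible "within-session learning parameter": the slope of a
    # linear fit to the trained ERP amplitude over trials in a session, correlated across
    # participants with symptom change. The study's actual definition is not reproduced.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_subjects, n_trials = 9, 40
    amplitudes = rng.normal(size=(n_subjects, n_trials))       # ERP amplitude per trial
    symptom_change = rng.normal(size=n_subjects)               # pre-post symptom change

    trials = np.arange(n_trials)
    slopes = np.array([np.polyfit(trials, amp, 1)[0] for amp in amplitudes])

    r, p = stats.pearsonr(slopes, symptom_change)
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```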

  1. Effects of hearing aids in the balance, quality of life and fear to fall in elderly people with sensorineural hearing loss

    PubMed Central

    Lacerda, Clara Fonseca; Silva, Luciana Oliveira e; de Tavares Canto, Roberto Sérgio; Cheik, Nadia Carla

    2012-01-01

    Introduction: The aging process causes structural and functional changes that compromise postural control and central processing. Studies have addressed the need to identify risk factors for auditory health and safety in elderly people affected by auditory deficits and balance alterations. Objective: To evaluate the effect of hearing aids on quality of life, balance and fear of falling in elderly people with bilateral hearing loss. Method: Clinical and experimental study of 56 elderly people with sensorineural hearing loss who were fitted with individual sound amplification devices (AASI). Participants answered the Short Form Health Survey (SF-36) quality-of-life questionnaire and the Falls Efficacy Scale-International (FES-I), and completed the Berg Balance Scale (BBS). After 4 months, those who had adapted to the use of the AASI were reevaluated. Results: 50% of the participants adapted to the AASI. Men had greater difficulty adapting to the device, whereas age, degree of loss, and presence of tinnitus or vertigo did not interfere with adaptation to the hearing aid. After adaptation, quality of life improved in the General Health (EGS) and Functional Capacity (CF) domains, tinnitus improved, and self-confidence increased. Conclusion: The use of hearing aids improved quality-of-life domains, which was reflected in better self-confidence and, in the long run, in a reduction of the fear of falling in elderly people with sensorineural hearing loss. PMID:25991930

  2. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    PubMed

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good optional treatment for many ANSD children. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
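    The 4-frequency average hearing level referred to above is simply the mean of the aided thresholds at 500, 1000, 2000 and 4000 Hz; a short computation with hypothetical threshold values is shown below.

    ```python
    # Four-frequency average hearing level: mean of thresholds at 0.5, 1, 2 and 4 kHz.
    # Threshold values below are hypothetical.
    thresholds_db_hl = {500: 30, 1000: 35, 2000: 40, 4000: 45}
    pta4 = sum(thresholds_db_hl.values()) / len(thresholds_db_hl)
    print(f"4-frequency average: {pta4:.1f} dB HL")   # 37.5 dB HL for these values
    ```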

  4. Use of transcranial direct current stimulation for the treatment of auditory hallucinations of schizophrenia – a systematic review

    PubMed Central

    Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila

    2017-01-01

    Introduction: Auditory hallucinations are defined as experiences of auditory perception in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with a high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS), a safe, portable, and inexpensive neuromodulation technique, has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence available in the literature for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed, searching the main electronic databases, including the Cochrane Library and MEDLINE/PubMed. Searches combined Medical Subject Headings (MeSH) and Health Sciences Descriptors terms and their contractions. The PRISMA protocol was used as a guide, and the clinical outcome terms (“Schizophrenia” OR “Auditory Hallucinations” OR “Auditory Verbal Hallucinations” OR “Psychosis”) were searched together (“AND”) with the intervention terms (“transcranial Direct Current Stimulation” OR “tDCS” OR “Brain Polarization”). Results: Six randomized controlled trials that evaluated the effects of tDCS on the severity of auditory hallucinations in schizophrenic patients were selected. Analysis of the clinical results of these studies revealed inconsistent findings regarding the therapeutic use of tDCS for reducing the severity of auditory hallucinations in schizophrenia. Only three studies revealed a therapeutic benefit, manifested as reductions in the severity and frequency of auditory verbal hallucinations in schizophrenic patients. Conclusion: Although tDCS has shown promising results in reducing the severity of auditory hallucinations in schizophrenic patients, this technique cannot yet be used as a therapeutic alternative due to the lack of large-sample studies confirming the positive effects that have been described. PMID:28203084

  5. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms

    PubMed Central

    Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe

    2017-01-01

    Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory – audio-visual – targets in the adaptation phase were used, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs, than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233

  6. The maturation state of the auditory nerve and brainstem in rats exposed to lead acetate and supplemented with ferrous sulfate.

    PubMed

    Zucki, Fernanda; Morata, Thais C; Duarte, Josilene L; Ferreira, Maria Cecília F; Salgado, Manoel H; Alvarenga, Kátia F

    The literature has reported an association between lead and auditory effects, based on clinical and experimental studies. However, there is no consensus regarding the effects of lead on the auditory system, or its correlation with the concentration of the metal in the blood. To investigate the maturation state of the auditory system, specifically the auditory nerve and brainstem, in rats exposed to lead acetate and supplemented with ferrous sulfate. 30 weanling male rats (Rattus norvegicus, Wistar) were distributed into six groups of five animals each and exposed to one of two concentrations of lead acetate (100 or 400 mg/L) and supplemented with ferrous sulfate (20 mg/kg). The maturation state of the auditory nerve and brainstem was analyzed using the Brainstem Auditory Evoked Potential before and after lead exposure. The concentration of lead in blood and brainstem was analyzed using Inductively Coupled Plasma-Mass Spectrometry. We verified that the concentrations of Pb in blood and in brainstem presented a high correlation (r=0.951; p<0.0001). Both concentrations of lead acetate affected the maturation state of the auditory system, with slower maturation in the regions corresponding to the auditory nerve (wave I) and cochlear nuclei (wave II). Ferrous sulfate supplementation significantly reduced the concentration of lead in blood and brainstem for the group exposed to the lower concentration of lead (100 mg/L), but not for the group exposed to the higher concentration (400 mg/L). This study indicates that lead acetate can have deleterious effects on the maturation of the auditory nerve and brainstem (cochlear nucleus region), as detected by the Brainstem Auditory Evoked Potentials, and that ferrous sulfate can partially mitigate this effect. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. All rights reserved.

  7. Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location.

    PubMed

    Smulders, Tom V; Jarvis, Erich D

    2013-11-01

    Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    PubMed Central

    Henshaw, Helen; Ferguson, Melanie A.

    2013-01-01

    Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431

  9. No auditory experience, no tinnitus: Lessons from subjects with congenital- and acquired single-sided deafness.

    PubMed

    Lee, Sang-Yeon; Nam, Dong Woo; Koo, Ja-Won; De Ridder, Dirk; Vanneste, Sven; Song, Jae-Jin

    2017-10-01

    Recent studies have adopted the Bayesian brain model to explain the generation of tinnitus in subjects with auditory deafferentation. That is, as the human brain works in a Bayesian manner to reduce environmental uncertainty, missing auditory information due to hearing loss may cause auditory phantom percepts, i.e., tinnitus. This type of deafferentation-induced auditory phantom percept should be preceded by auditory experience because the fill-in phenomenon, namely tinnitus, is based upon auditory prediction and the resultant prediction error. For example, a recent animal study observed the absence of tinnitus in cats with congenital single-sided deafness (SSD; Eggermont and Kral, Hear Res 2016). However, no human studies have investigated the presence and characteristics of tinnitus in subjects with congenital SSD. Thus, the present study sought to reveal differences in the generation of tinnitus between subjects with congenital SSD and those with acquired SSD to evaluate the replicability of previous animal studies. This study enrolled 20 subjects with congenital SSD and 44 subjects with acquired SSD and examined the presence and characteristics of tinnitus in the groups. None of the 20 subjects with congenital SSD perceived tinnitus on the affected side, whereas 30 of 44 subjects with acquired SSD experienced tinnitus on the affected side. Additionally, there were significant positive correlations between tinnitus characteristics and the audiometric characteristics of the SSD. In accordance with the findings of the recent animal study, tinnitus was absent in subjects with congenital SSD, but relatively frequent in subjects with acquired SSD, which suggests that the development of tinnitus should be preceded by auditory experience. In other words, subjects with profound congenital peripheral deafferentation do not develop auditory phantom percepts because no auditory predictions are available from the Bayesian brain. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Adult Plasticity in the Subcortical Auditory Pathway of the Maternal Mouse

    PubMed Central

    Miranda, Jason A.; Shepard, Kathryn N.; McClintock, Shannon K.; Liu, Robert C.

    2014-01-01

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system – motherhood – is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. A shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered. PMID:24992362
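    The ABR measures discussed above (peak latencies and the wave IV-V interpeak latency) can be extracted from an averaged waveform as sketched below. The waveform, sampling rate and peak-picking threshold are assumptions for illustration; in practice, assigning peaks to waves I-V requires expert verification.

    ```python
    # Hedged sketch: locate ABR wave peaks in an averaged waveform and compute an
    # interpeak latency (e.g. wave IV to wave V). The waveform here is synthetic.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 20000.0                          # 20 kHz sampling (assumption)
    t_ms = np.arange(0, 10, 1000 / fs)    # 0-10 ms post-stimulus
    # Synthetic ABR-like waveform: Gaussian bumps near typical wave latencies (ms)
    wave_latencies = [1.5, 2.6, 3.7, 4.9, 5.8]
    abr = sum(np.exp(-((t_ms - mu) ** 2) / (2 * 0.15 ** 2)) for mu in wave_latencies)

    peaks, _ = find_peaks(abr, height=0.5)
    peak_latencies = t_ms[peaks]          # should recover roughly the five wave latencies
    ipl_iv_v = peak_latencies[4] - peak_latencies[3]
    print(f"wave IV-V interpeak latency: {ipl_iv_v:.2f} ms")
    ```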

  11. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if the latter was trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Working memory training in congenitally blind individuals results in an integration of occipital cortex in functional networks.

    PubMed

    Gudi-Mindermann, Helene; Rimmele, Johanna M; Nolte, Guido; Bruns, Patrick; Engel, Andreas K; Röder, Brigitte

    2018-04-12

    The functional relevance of crossmodal activation (e.g. auditory activation of occipital brain regions) in congenitally blind individuals is still not fully understood. The present study tested whether the occipital cortex of blind individuals is integrated into a challenged functional network. Working memory (WM) training over four sessions was implemented. Congenitally blind and matched sighted participants were adaptively trained with an n-back task employing either voices (auditory training) or tactile stimuli (tactile training). In addition, a minimally demanding 1-back task served as an active control condition. Power and functional connectivity of EEG activity evolving during the maintenance period of an auditory 2-back task, run prior to and after the WM training, were analyzed. Modality-specific (following auditory training) and modality-independent (following both auditory and tactile training) WM training effects were assessed. Improvements in auditory WM were observed in all groups, and blind and sighted individuals did not differ in training gains. Auditory and tactile training of sighted participants led, relative to the active control group, to an increase in fronto-parietal theta-band power, suggesting a training-induced strengthening of the existing modality-independent WM network. No power effects were observed in the blind. Rather, after auditory training the blind showed a decrease in theta-band connectivity between central, parietal, and occipital electrodes compared with the blind tactile-training and active control groups. Furthermore, in the blind, auditory training increased beta-band connectivity between fronto-parietal, central and occipital electrodes. In the congenitally blind, these findings suggest a stronger integration of occipital areas into the auditory WM network. Copyright © 2018 Elsevier B.V. All rights reserved.
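
    As an illustration of the kind of spectral measure referred to above, the sketch below computes theta-band power per electrode from maintenance-period epochs using Welch's method. The band limits (here 4-7 Hz), sampling rate, and array shapes are assumptions made for the example; this is not the authors' analysis pipeline.

        import numpy as np
        from scipy.signal import welch

        def theta_band_power(epochs, fs_hz, band=(4.0, 7.0)):
            """epochs: (n_trials, n_channels, n_samples) -> mean theta power per channel."""
            nperseg = min(epochs.shape[-1], int(2 * fs_hz))
            freqs, psd = welch(epochs, fs=fs_hz, nperseg=nperseg, axis=-1)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            band_power = psd[..., mask].sum(axis=-1) * (freqs[1] - freqs[0])
            return band_power.mean(axis=0)                 # average over trials

        # Hypothetical usage with simulated data: 40 trials, 64 channels, 3 s at 500 Hz
        rng = np.random.default_rng(0)
        epochs = rng.standard_normal((40, 64, 1500))
        theta = theta_band_power(epochs, fs_hz=500.0)      # shape (64,)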

  13. Neural plasticity expressed in central auditory structures with and without tinnitus

    PubMed Central

    Roberts, Larry E.; Bosnyak, Daniel J.; Thompson, David C.

    2012-01-01

    Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To assess this assumption, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by electroencephalography (EEG) are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5-kHz, 40-Hz amplitude-modulated (AM) sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and the P2 transient response, known to localize to primary and non-primary auditory cortex, respectively. P2 amplitude increased over training sessions equally in participants with tinnitus and in control subjects, suggesting normal remodeling of non-primary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls the phase delay between the 40-Hz response and stimulus waveforms decreased by about 10° over training, in agreement with previous results obtained in young normal-hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal-hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not non-primary auditory cortex. Auditory training did not reduce tinnitus loudness, although a small effect on the tinnitus spectrum was detected. PMID:22654738
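
    The 40-Hz ASSR amplitude and phase discussed above are typically read from the spectrum of an averaged, stimulus-locked epoch. The sketch below shows one common way to do this, by reading the FFT bin nearest the modulation frequency; the sampling rate and the simulated epoch are placeholders, not the authors' data or code.

        import numpy as np

        def assr_amplitude_phase(avg_epoch, fs_hz, mod_freq_hz=40.0):
            """Single-sided amplitude and phase (deg) at the AM rate of an averaged epoch."""
            n = len(avg_epoch)
            spectrum = np.fft.rfft(avg_epoch)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)
            k = int(np.argmin(np.abs(freqs - mod_freq_hz)))   # bin nearest the 40-Hz AM rate
            amplitude = 2.0 * np.abs(spectrum[k]) / n
            phase_deg = np.degrees(np.angle(spectrum[k]))     # phase relative to epoch onset
            return amplitude, phase_deg

        # Hypothetical usage: a 1-s simulated 40-Hz response sampled at 1 kHz
        fs = 1000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        epoch = 0.5 * np.sin(2 * np.pi * 40.0 * t - np.pi / 6)
        amp, phase = assr_amplitude_phase(epoch, fs)          # approx. 0.5 and -120 deg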

  14. Differential effects of rhythmic auditory stimulation and neurodevelopmental treatment/Bobath on gait patterns in adults with cerebral palsy: a randomized controlled trial.

    PubMed

    Kim, Soo Ji; Kwak, Eunmi E; Park, Eun Sook; Cho, Sung-Rae

    2012-10-01

    To investigate the effects of rhythmic auditory stimulation (RAS) on gait patterns in comparison with changes after neurodevelopmental treatment (NDT/Bobath) in adults with cerebral palsy. A repeated-measures analysis between the pretreatment and posttreatment tests and a comparison between groups. Human gait analysis laboratory. Twenty-eight cerebral palsy patients with bilateral spasticity participated in this study. The subjects were randomly allocated to either neurodevelopmental treatment (n = 13) or rhythmic auditory stimulation (n = 15). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment was performed in three sessions per week for three weeks. Temporal and kinematic data were analysed before and after the intervention. Rhythmic auditory stimulation was provided using a combination of a metronome beat set to the individual's cadence and rhythmic cueing from a live keyboard, while neurodevelopmental treatment was implemented following the traditional method. Temporal data, kinematic parameters and the gait deviation index, as a measure of overall gait pathology, were assessed. Temporal gait measures revealed that rhythmic auditory stimulation significantly increased cadence, walking velocity, stride length, and step length (P < 0.05). Kinematic data demonstrated that anterior tilt of the pelvis and hip flexion during the gait cycle were significantly reduced after rhythmic auditory stimulation (P < 0.05). The gait deviation index also showed modest improvement in cerebral palsy patients treated with rhythmic auditory stimulation (P < 0.05). However, after neurodevelopmental treatment, internal and external rotation of the hip joints improved significantly, whereas rhythmic auditory stimulation aggravated maximal internal rotation in the transverse plane (P < 0.05). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment elicited differential effects on gait patterns in adults with cerebral palsy.
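
    The temporal gait measures named above (cadence, walking velocity, stride length, step length) can be derived from heel-strike times and the distance walked. The sketch below is a hypothetical illustration of those definitions, assuming a symmetric gait; it is not the study's analysis code.

        import numpy as np

        def temporal_gait_measures(heel_strikes_s, distance_m):
            """heel_strikes_s: successive heel-strike times (s) of one foot over distance_m."""
            stride_time_s = np.diff(heel_strikes_s).mean()
            cadence_steps_min = 2 * 60.0 / stride_time_s          # two steps per stride
            velocity_m_s = distance_m / (heel_strikes_s[-1] - heel_strikes_s[0])
            stride_length_m = velocity_m_s * stride_time_s
            step_length_m = stride_length_m / 2.0                 # symmetric-gait assumption
            return cadence_steps_min, velocity_m_s, stride_length_m, step_length_m

        # Hypothetical usage: five heel strikes of the right foot over a 6-m walkway
        measures = temporal_gait_measures(np.array([0.0, 1.2, 2.4, 3.6, 4.8]), 6.0)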

  15. Sensory Coding and Sensitivity to Local Estrogens Shift during Critical Period Milestones in the Auditory Cortex of Male Songbirds.

    PubMed

    Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke

    2017-01-01

    Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when pre-singing birds acquire an auditory memory of their tutor's song (sensory phase), followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.

  16. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum exists reliably across 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristic of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds, compared with one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 indicate that consonance is an important dimension of sound: it is processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and it was found to be a principal feature of pleasing auditory stimuli.
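
    The r values quoted above are correlations between per-stimulus measures. As a toy illustration of that analysis style, the sketch below correlates hypothetical mean pleasantness ratings with hypothetical consonance scores for 20 stimuli using a Pearson correlation; the numbers are made up, and the choice of Pearson r is an assumption since the abstract does not name the statistic.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        consonance = rng.uniform(0.0, 1.0, size=20)                  # hypothetical consonance scores
        pleasantness = 0.8 * consonance + rng.normal(0.0, 0.2, 20)   # hypothetical mean ratings
        r, p = pearsonr(consonance, pleasantness)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")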

  17. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli combining the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  18. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli combining the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097

  19. Relative size of auditory pathways in symmetrically and asymmetrically eared owls.

    PubMed

    Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R

    2011-01-01

    Owls are highly efficient predators with a specialized auditory system that aids in the localization of prey. One of the most distinctive anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and that other auditory nuclei not directly involved in binaural comparisons are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded the expansion of the hearing range, and that evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.

  20. Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2013-01-01

    Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
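
    The decoding approach described above can be summarized as training a classifier on population responses to clean sounds and testing it on responses to the same sounds in noise. The sketch below is a schematic version of that idea, with assumed array shapes and an arbitrary choice of classifier; it is not the authors' implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        def noise_tolerance(clean_resp, noisy_resp, labels):
            """clean_resp, noisy_resp: (n_trials, n_neurons); labels: sound identity per trial."""
            clf = LogisticRegression(max_iter=1000).fit(clean_resp, labels)
            return accuracy_score(labels, clf.predict(noisy_resp))

        # Hypothetical usage: 200 trials of 50-neuron responses to 10 sounds
        rng = np.random.default_rng(0)
        labels = np.repeat(np.arange(10), 20)
        clean = rng.standard_normal((200, 50)) + labels[:, None] * 0.1
        noisy = clean + rng.standard_normal((200, 50))       # simulated added noise
        score = noise_tolerance(clean, noisy, labels)        # fraction decoded correctly in noise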
