Sample records for auditory objects underlying

  1. Auditory memory can be object based.

    PubMed

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  2. Cortical mechanisms for the segregation and representation of acoustic textures.

    PubMed

    Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D

    2010-02-10

    Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
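
    The stimulus construction described above can be sketched compactly. Below is a minimal Python illustration, not the authors' code: an acoustic texture built from frequency-modulated ramps in which a coherence parameter sets the fraction of ramps sharing a common sweep rate, and a boundary is a change in that statistic. All durations, frequency ranges, and sweep rates are illustrative assumptions.

```python
import numpy as np

FS = 16000  # sampling rate in Hz (assumed)

def fm_ramp(f0, f1, dur=0.1, fs=FS):
    """One linear frequency-modulated ramp, Hanning-windowed."""
    t = np.arange(int(dur * fs)) / fs
    freq = np.linspace(f0, f1, t.size)          # instantaneous frequency (Hz)
    phase = 2 * np.pi * np.cumsum(freq) / fs    # integrate frequency to phase
    return np.sin(phase) * np.hanning(t.size)

def texture(coherence, n_ramps=60, dur=1.0, fs=FS, seed=0):
    """Sum of ramps; `coherence` is the fraction sharing one sweep rate."""
    rng = np.random.default_rng(seed)
    sig = np.zeros(int(dur * fs))
    common_rate = 2000.0  # sweep rate (Hz/s) of the coherent subset (assumed)
    for _ in range(n_ramps):
        f0 = rng.uniform(500, 4000)
        rate = common_rate if rng.random() < coherence else rng.uniform(-4000, 4000)
        ramp = fm_ramp(f0, f0 + rate * 0.1)
        onset = rng.integers(0, sig.size - ramp.size)
        sig[onset:onset + ramp.size] += ramp
    return sig / np.abs(sig).max()

# A "boundary" is a change in the statistical rule, e.g. coherence 0.2 -> 0.9:
boundary_stim = np.concatenate([texture(0.2), texture(0.9)])
```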

  3. Neural dynamics underlying attentional orienting to auditory representations in short-term memory.

    PubMed

    Backer, Kristina C; Binns, Malcolm A; Alain, Claude

    2015-01-21

    Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.


  4. The what, where and how of auditory-object perception.

    PubMed

    Bizley, Jennifer K; Cohen, Yale E

    2013-10-01

    The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.

  5. Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device.

    PubMed

    Brown, David; Macpherson, Tom; Ward, Jamie

    2011-01-01

    Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion) and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions on blindfolded sighted participants. The main task that we used involved identifying and locating objects placed on a table by holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with a hand-held device, but locating the objects was easier with a head-mounted device. Brightness converted into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than to the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes in two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
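
    To make the conversion concrete, here is a minimal sketch of a vOICe-style column-scan mapping, not the device's actual algorithm: each image column becomes a time slice, row position maps to pitch, and pixel brightness maps to loudness. The frequency range, scan duration, and logarithmic pitch spacing are assumptions.

```python
import numpy as np

def image_to_sound(image, fs=16000, scan_dur=1.0, f_lo=200.0, f_hi=5000.0):
    """image: 2D array in [0, 1], row 0 at the top. Returns a mono waveform."""
    n_rows, n_cols = image.shape
    col_len = int(fs * scan_dur / n_cols)      # samples per image column
    t = np.arange(col_len) / fs
    freqs = np.geomspace(f_hi, f_lo, n_rows)   # higher rows -> higher pitch
    out = []
    for c in range(n_cols):                    # left-to-right scan
        tones = (image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                 for r in range(n_rows))
        out.append(sum(tones))
    sig = np.concatenate(out)
    return sig / (np.abs(sig).max() + 1e-9)

# The reversed contrast reported to work better ("dark is loud") is simply:
# sig = image_to_sound(1.0 - image)
```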

  6. The what, where and how of auditory-object perception

    PubMed Central

    Bizley, Jennifer K.; Cohen, Yale E.

    2014-01-01

    The fundamental perceptual unit in hearing is the ‘auditory object’. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood. PMID:24052177

  7. Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation

    PubMed Central

    2013-01-01

    Background Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training sessions and one post-training session. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. Results After both passive listening and active training, the amplitude of the P2m wave with a latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore, the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. Conclusion The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability to scrutinize fine differences in pre-voicing time. The different trajectories of brain and behaviour changes suggest that the preceding P2m increase reflects brain processes that are necessary precursors of perceptual learning. Caution is required when interpreting a P2 amplitude increase between recordings made before and after training and learning. PMID:24314010

  8. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyrus, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potential and audiometry were intact. This case suggests that a unilateral subcortical lesion involving the acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  9. Influence of non-contextual auditory stimuli on navigation in a virtual reality context involving executive functions among patients after stroke.

    PubMed

    Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain

    2018-01-31

    Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate executive functions of patients. The study included 40 patients who have had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. Across the 5 conditions, the lower the navigational performance, the more GREFEX tests patients failed. Patients felt significantly disadvantaged by the non-contextual sounds (sounds from living beings, sounds from supermarket objects and names of other products) as compared with beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli.

  10. Auditory Task Irrelevance: A Basis for Inattentional Deafness

    PubMed Central

    Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.

    2018-01-01

    Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
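
    As a generic illustration of the ERP measure used here (epoch averaging, not the authors' pipeline), one can average epochs time-locked to the environmental sounds and take the mean amplitude in a late window; the 350-550 ms window and sampling rate below are assumptions.

```python
import numpy as np

def novelty_p3(epochs, fs=500, window=(0.35, 0.55)):
    """epochs: (n_trials, n_samples) EEG time-locked to sound onset at t = 0.
    Returns the average waveform and its mean amplitude in the late window."""
    avg = epochs.mean(axis=0)                        # the event-related potential
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return avg, avg[i0:i1].mean()

# Comparing this late amplitude between the task-relevant and task-irrelevant
# groups indexes how much the irrelevant auditory scene is still processed.
```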

  11. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction  Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective  The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method  Seventeen normal-hearing individuals participated in the study; all gave informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure-tone stimuli, using 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, we used 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result  MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference between the two conditions on any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve). Conclusion  The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
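
    The two recording conditions reduce to a standard passive oddball design. A minimal sketch of the trial sequences (the deviant probability and trial count are assumptions not reported above):

```python
import numpy as np

def oddball_sequence(standard_hz, deviant_hz, n_trials=500, p_deviant=0.15, seed=0):
    """Trial-by-trial stimulus frequencies for an MMN oddball block."""
    rng = np.random.default_rng(seed)
    is_deviant = rng.random(n_trials) < p_deviant
    return np.where(is_deviant, deviant_hz, standard_hz)

fine_block = oddball_sequence(1000, 1010)    # fine frequency difference
gross_block = oddball_sequence(1000, 1100)   # gross frequency difference
```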

  12. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities.
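
    The cue itself is straightforward to synthesize. A minimal sketch with illustrative parameters, not the authors' stimuli: a broadband noise carrier amplitude-modulated at a rate standing in for speed.

```python
import numpy as np

def rattle(am_hz, dur=1.0, fs=16000, seed=0):
    """Broadband noise carrier, amplitude-modulated at am_hz."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    carrier = rng.standard_normal(t.size)
    modulator = 0.5 * (1 + np.sin(2 * np.pi * am_hz * t))  # in [0, 1]
    return carrier * modulator

slow = rattle(am_hz=20)   # lower AM rate, heard as slower motion
fast = rattle(am_hz=80)   # higher AM rate, heard as faster motion
```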

  13. Perceptual Plasticity for Auditory Object Recognition

    PubMed Central

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524

  14. Online control of reaching and pointing to visual, auditory, and multimodal targets: Effects of target modality and method of determining correction latency.

    PubMed

    Holmes, Nicholas P; Dakwar, Azar R

    2015-12-01

    Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100 ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets.
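
    A plausible reading of the line-fitting method, sketched under my own assumptions (taking the rising phase between 20% and 80% of peak correction velocity): fit a straight line to that phase and extrapolate back to zero velocity to estimate correction onset.

```python
import numpy as np

def correction_onset(t, lateral_vel, frac=(0.2, 0.8)):
    """t and lateral_vel are 1D arrays sampled at matching times.
    Fits a line to the rising phase of the rectified correction velocity
    and extrapolates back to zero velocity to estimate correction onset."""
    v = np.abs(lateral_vel)
    peak = np.argmax(v)
    lo, hi = frac[0] * v[peak], frac[1] * v[peak]
    rising = np.where((v[:peak + 1] >= lo) & (v[:peak + 1] <= hi))[0]
    slope, intercept = np.polyfit(t[rising], v[rising], 1)
    return -intercept / slope   # time at which the fitted line crosses zero
```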

  15. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.
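
    The decoding-accuracy measure is, in essence, cross-validated pattern classification. A generic stand-in using placeholder random data rather than the study's fMRI patterns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))   # 80 trials x 500 voxels (placeholder data)
y = np.repeat([0, 1], 40)            # emotion labels: 0 = crying, 1 = laughing

# Mean cross-validated accuracy is the "decoding accuracy" for one condition;
# comparing it across conditions indexes the strength of the representation.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```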

  16. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds, compared with one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 support the conclusion that consonance is an important dimension of sound, one that is processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and one that was found to be a principal feature of pleasing auditory stimuli.

  17. Multivariate sensitivity to voice during auditory categorization.

    PubMed

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.

  18. A dual-process account of auditory change detection.

    PubMed

    McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B

    2010-08-01

    Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either the SDT or the HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
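
    The competing ROC shapes can be written down directly. A sketch of the three model families named above, equal-variance SDT, a linear high-threshold model, and a dual-process mixture of the two, with illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

fa = np.linspace(0.001, 0.999, 99)             # false-alarm rates

def roc_sdt(fa, d):
    """Equal-variance signal detection theory ROC with sensitivity d'."""
    return norm.cdf(d + norm.ppf(fa))

def roc_htt(fa, p):
    """High-threshold ROC: detect with probability p, otherwise guess."""
    return p + (1 - p) * fa

def roc_dual(fa, p, d):
    """Dual-process ROC: a threshold component plus an SDT component."""
    return p + (1 - p) * roc_sdt(fa, d)

hits = roc_dual(fa, p=0.3, d=1.0)              # one example parameterization
```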

  19. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
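
    Stimulus reconstruction of this kind is typically a regularized linear "backward" model mapping time-lagged neural channels to the attended speech envelope. The sketch below is a generic version with my own choices of lag count and ridge penalty, not the authors' pipeline.

```python
import numpy as np

def reconstruct_envelope(neural, envelope, n_lags=32, ridge=1e3):
    """neural: (T, channels) MEG/EEG; envelope: (T,) speech envelope.
    Learns a ridge-regularized linear decoder from lagged neural activity
    and returns the reconstructed envelope."""
    T, C = neural.shape
    X = np.zeros((T, C * n_lags))
    for lag in range(n_lags):                      # lagged copies of channels
        X[lag:, lag * C:(lag + 1) * C] = neural[:T - lag]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(C * n_lags), X.T @ envelope)
    return X @ w

# Reconstruction fidelity is then scored as, e.g.,
# np.corrcoef(reconstruct_envelope(neural, env), env)[0, 1].
```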

  20. Air and Bone Conduction Frequency-specific Auditory Brainstem Response in Children with Agenesis of the External Auditory Canal

    PubMed Central

    Sleifer, Pricila; Didoné, Dayane Domeneghini; Keppeler, Ísis Bicca; Bueno, Claudine Devicari; Riesgo, Rudimar dos Santos

    2017-01-01

    Introduction  The tone-evoked auditory brainstem response (tone-ABR) enables differential diagnosis in the evaluation of children up to 12 months of age, including those with external and/or middle ear malformations. The use of auditory stimuli with frequency specificity by air and bone conduction allows characterization of the hearing profile. Objective  The objective of our study was to compare the results obtained with tone-ABR by air and bone conduction in children up to 12 months of age with agenesis of the external auditory canal. Method  The study was cross-sectional, observational, individual, and contemporary. We conducted the research with tone-ABR by air and bone conduction at the frequencies of 500 Hz and 2000 Hz in 32 children (23 boys), from one to 12 months old, with agenesis of the external auditory canal. Results  The tone-ABR thresholds were significantly elevated for air conduction at the frequencies of 500 Hz and 2000 Hz, while the thresholds for bone conduction had normal values in both ears. We found no statistically significant difference between genders and ears for most of the comparisons. Conclusion  Thresholds obtained by bone conduction were unaffected in children with conductive hearing loss, whereas all air-conduction thresholds were elevated. Tone-ABR by bone conduction is an important tool for assessing cochlear integrity in children under 12 months with agenesis of the external auditory canal. PMID:29018492

  21. Speech Evoked Auditory Brainstem Response in Stuttering

    PubMed Central

    Tahaei, Ali Akbar; Ashayeri, Hassan; Pourbakht, Akram; Kamali, Mohammad

    2014-01-01

    Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency. PMID:25215262

  22. Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location.

    PubMed

    Smulders, Tom V; Jarvis, Erich D

    2013-11-01

    Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli.

  23. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
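
    The key model assumption, cortical units that encode an error signal, can be caricatured in a few lines. This is a toy nonnegative version under my own choices of learning rate and iteration count, not the published circuit model:

```python
import numpy as np

def identify_sources(observed, dictionary, n_iter=200, lr=0.05):
    """observed: (F,) spectrum; dictionary: (F, K) spectra of known objects.
    Iteratively explains the observation as a nonnegative sum of objects."""
    coeff = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        error = observed - dictionary @ coeff    # the hypothesized error units
        coeff += lr * (dictionary.T @ error)     # update estimate to shrink it
        coeff = np.maximum(coeff, 0.0)           # source levels are nonnegative
    return coeff, observed - dictionary @ coeff  # identities and final error

# Large coefficients mark which dictionary objects are present in the scene.
```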

  24. Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets.

    PubMed

    Mahr, Angela; Wentura, Dirk

    2014-02-01

    Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.

  25. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomenon. Cortical oscillatory band activity may act as neurophysiological substrates for auditory prediction. Tinnitus has been modeled as an auditory object which may demonstrate incomplete processing during auditory scene analysis resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies.

  26. An assessment of auditory-guided locomotion in an obstacle circumvention task.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2016-06-01

    This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.

  27. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  28. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  9. "Change deafness" arising from inter-feature masking within a single auditory object.

    PubMed

    Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria

    2014-03-01

    Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the on-going regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.
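
    The stimulus is easy to reproduce in outline. A sketch with my own frequency choices (the exact frequencies are not given above): a repeating ABC pattern of 60 msec pips ending in a long tone that either continues or violates the pattern.

```python
import numpy as np

FS = 16000  # sampling rate in Hz (assumed)

def pip(freq, dur, fs=FS):
    """A Hanning-windowed pure tone pip."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def abc_sequence(expected=True, abc=(440.0, 554.0, 659.0), n_cycles=4):
    """Repeating ABC pattern of 60-ms pips, ending in a 300-ms long tone."""
    pips = [pip(f, 0.06) for _ in range(n_cycles) for f in abc]
    # The pattern predicts abc[0] next; the unexpected version breaks the rule.
    final = abc[0] if expected else 880.0
    pips.append(pip(final, 0.3))
    return np.concatenate(pips)
```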

  30. Feature assignment in perception of auditory figure.

    PubMed

    Gregg, Melissa K; Samuel, Arthur G

    2012-08-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

  31. Auditory salience using natural soundscapes.

    PubMed

    Huang, Nicholas; Elhilali, Mounya

    2017-03-01

    Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.

  32. Selective impairment of auditory selective attention under concurrent cognitive load.

    PubMed

    Dittrich, Kerstin; Stahl, Christoph

    2012-06-01

    Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.

  33. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information.

  15. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  16. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    PubMed Central

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  17. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, a frequency-pruning algorithm and a detector-pruning algorithm are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme, developed by linearizing certain auditory model stages, ensures a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
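
    A minimal sketch of the frequency-pruning idea, under the assumption of a generic per-channel model stage: channels whose energy falls below a relative threshold are skipped before an expensive computation. The threshold rule and the `expensive_stage` callable are hypothetical illustrations; the dissertation's actual pruning criteria are not reproduced here.

    ```python
    import numpy as np

    def pruned_response(channel_energies, expensive_stage, rel_thresh=0.01):
        """Apply expensive_stage only to channels whose energy exceeds
        rel_thresh times the maximum; pruned channels contribute zero."""
        energies = np.asarray(channel_energies, dtype=float)
        out = np.zeros_like(energies)
        keep = energies > rel_thresh * energies.max()
        out[keep] = [expensive_stage(e) for e in energies[keep]]
        return out

    # Hypothetical costly per-channel nonlinearity as the pruned stage:
    out = pruned_response(np.array([0.001, 0.5, 1.0, 0.0001]),
                          expensive_stage=lambda e: e ** 0.3)
    ```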

  18. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object

    PubMed Central

    Smith, Nicholas A.; Folland, Nicholas A.; Martinez, Diana M.; Trainor, Laurel J.

    2017-01-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
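
    The stimulus class at the heart of this paradigm is concrete enough to sketch: a harmonic complex in which one harmonic is mistuned by 8%. In the code below, the fundamental frequency, component count, and duration are illustrative assumptions, not the study's exact parameters.

    ```python
    import numpy as np

    def complex_tone(f0=200.0, n_harm=10, mistuned_harm=3, mistune_pct=8.0,
                     fs=44100, dur=1.0):
        """Sum of n_harm harmonics of f0; harmonic mistuned_harm is shifted
        by mistune_pct percent (use mistune_pct=0 for the in-tune control)."""
        t = np.arange(int(fs * dur)) / fs
        x = np.zeros_like(t)
        for k in range(1, n_harm + 1):
            f = k * f0
            if k == mistuned_harm:
                f *= 1.0 + mistune_pct / 100.0   # 8% mistuning by default
            x += np.sin(2 * np.pi * f * t)
        return x / n_harm

    in_tune = complex_tone(mistune_pct=0.0)
    mistuned = complex_tone(mistune_pct=8.0)
    ```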

  19. Central Auditory Maturation and Behavioral Outcome in Children with Auditory Neuropathy Spectrum Disorder who Use Cochlear Implants

    PubMed Central

    Cardon, Garrett; Sharma, Anu

    2013-01-01

    Objective We examined cortical auditory development and behavioral outcomes in children with ANSD fitted with cochlear implants (CI). Design Cortical maturation, measured by P1 cortical auditory evoked potential (CAEP) latency, was regressed against scores on the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS). Implantation age was also considered in relation to CAEP findings. Study Sample Cross-sectional and longitudinal samples of 24 and 11 children, respectively, with ANSD fitted with CIs. Result P1 CAEP responses were present in all children after implantation, though previous findings suggest that only 50-75% of ANSD children with hearing aids show CAEP responses. P1 CAEP latency was significantly correlated with participants' IT-MAIS scores. Furthermore, more children implanted before age two years showed normal P1 latencies, while those implanted later mainly showed delayed latencies. Longitudinal analysis revealed that most children showed normal or improved cortical maturation after implantation. Conclusion Cochlear implantation resulted in measurable cortical auditory development for all children with ANSD. Children fitted with CIs under age two years were more likely to show age-appropriate CAEP responses within 6 months after implantation, suggesting a possible sensitive period for cortical auditory development in ANSD. That CAEP responses were correlated with behavioral outcome highlights their clinical decision-making utility. PMID:23819618

  20. The impact of visual gaze direction on auditory object tracking.

    PubMed

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
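
    For readers unfamiliar with the band-power measures reported above, the sketch below estimates alpha- and theta-band power for a single channel via Welch's method. The band edges, sampling rate, and segment length are assumptions for illustration, not the study's analysis parameters.

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, band):
        """Integrated PSD within band=(lo, hi) Hz, via Welch's method."""
        f, psd = welch(x, fs=fs, nperseg=int(2 * fs))
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(psd[mask], f[mask])

    # Hypothetical single-channel EEG, 10 s at 500 Hz:
    eeg = np.random.default_rng(0).standard_normal(10 * 500)
    alpha = band_power(eeg, 500, (8, 12))   # occipital alpha (assumed band)
    theta = band_power(eeg, 500, (4, 7))    # central theta (assumed band)
    ```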

  1. Auditory-visual object recognition time suggests specific processing for animal sounds.

    PubMed

    Suied, Clara; Viaud-Delmon, Isabelle

    2009-01-01

    Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.

  2. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  3. Early multisensory interactions affect the competition among multiple visual objects.

    PubMed

    Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan

    2011-04-01

    In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Human auditory evoked potentials in the assessment of brain function during major cardiovascular surgery.

    PubMed

    Rodriguez, Rosendo A

    2004-06-01

    Focal neurologic and intellectual deficits or memory problems are relatively frequent after cardiac surgery. These complications have been associated with cerebral hypoperfusion, embolization, and inflammation that occur during or after surgery. Auditory evoked potentials, a neurophysiologic technique that evaluates the function of neural structures from the auditory nerve to the cortex, provide useful information about the functional status of the brain during major cardiovascular procedures. Skepticism regarding the presence of artifacts or difficulty in their interpretation has outweighed considerations of its potential utility and noninvasiveness. This paper reviews the evidence of their potential applications in several aspects of the management of cardiac surgery patients. The sensitivity of auditory evoked potentials to the effects of changes in brain temperature makes them useful for monitoring cerebral hypothermia and rewarming during cardiopulmonary bypass. The close relationship between evoked potential waveforms and specific anatomic structures facilitates the assessment of the functional integrity of the central nervous system in cardiac surgery patients. This feature may also be relevant in the management of critical patients under sedation and coma or in the evaluation of their prognosis during critical care. Their objectivity, reproducibility, and relative insensitivity to learning effects make auditory evoked potentials attractive for the cognitive assessment of cardiac surgery patients. From a clinical perspective, auditory evoked potentials represent an additional window for the study of underlying cerebral processes in healthy and diseased patients. From a research standpoint, this technology offers opportunities for a better understanding of the particular cerebral deficits associated with patients who are undergoing major cardiovascular procedures.

  5. Memory as embodiment: The case of modality and serial short-term memory.

    PubMed

    Macken, Bill; Taylor, John C; Kozlov, Michail D; Hughes, Robert W; Jones, Dylan M

    2016-10-01

    Classical explanations for the modality effect-superior short-term serial recall of auditory compared to visual sequences-typically recur to privileged processing of information derived from auditory sources. Here we critically appraise such accounts, and re-evaluate the nature of the canonical empirical phenomena that have motivated them. Three experiments show that the standard account of modality in memory is untenable, since auditory superiority in recency is often accompanied by visual superiority in mid-list serial positions. We explain this simultaneous auditory and visual superiority by reference to the way in which perceptual objects are formed in the two modalities and how those objects are mapped to speech motor forms to support sequence maintenance and reproduction. Specifically, stronger obligatory object formation operating in the standard auditory form of sequence presentation compared to that for visual sequences leads both to enhanced addressability of information at the object boundaries and reduced addressability for that in the interior. Because standard visual presentation does not lead to such object formation, such sequences do not show the boundary advantage observed for auditory presentation, but neither do they suffer loss of addressability associated with object information, thereby affording more ready mapping of that information into a rehearsal cohort to support recall. We show that a range of factors that impede this perceptual-motor mapping eliminate visual superiority while leaving auditory superiority unaffected. We make a general case for viewing short-term memory as an embodied, perceptual-motor process. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.

    PubMed

    Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas

    2015-12-09

    Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we thus bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention. Copyright © 2015 the authors 0270-6474/15/3516094-11$15.00/0.
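
    A steeper psychometric slope is the behavioral marker of precision used above. A minimal sketch with hypothetical data and a simple logistic model (not necessarily the paper's exact parameterization) shows how such a slope can be estimated:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(delta, slope, pse):
        """P("higher") as a logistic function of the pitch difference delta."""
        return 1.0 / (1.0 + np.exp(-slope * (delta - pse)))

    # Hypothetical responses: pitch differences (semitones) vs. proportion "higher".
    delta = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
    p_higher = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

    (slope, pse), _ = curve_fit(psychometric, delta, p_higher, p0=[1.0, 0.0])
    print(f"slope = {slope:.2f} (steeper = more precise), PSE = {pse:.2f}")
    ```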

  7. What's what in auditory cortices?

    PubMed

    Retsa, Chrysa; Matusz, Pawel J; Schnupp, Jan W H; Murray, Micah M

    2018-08-01

    Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant across blocks; the only manipulation involved the sound dimension that participants had to attend to. The task-relevant dimension was varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography; the latter of which forcibly follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the 4 feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation

    PubMed Central

    Kolarik, Andrew J.; Scarfe, Amy C.; Moore, Brian C. J.; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. PMID:28407000

  9. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  10. The Tactile Continuity Illusion

    ERIC Educational Resources Information Center

    Kitagawa, Norimichi; Igarashi, Yuka; Kashino, Makio

    2009-01-01

    We can perceive the continuity of an object or event by integrating spatially/temporally discrete sensory inputs. The mechanism underlying this perception of continuity has intrigued many researchers and has been well documented in both the visual and auditory modalities. The present study shows for the first time to our knowledge that an illusion…

  11. A Quality Improvement Study on Avoidable Stressors and Countermeasures Affecting Surgical Motor Performance and Learning

    PubMed Central

    Conrad, Claudius; Konuk, Yusuf; Werner, Paul D.; Cao, Caroline G.; Warshaw, Andrew L.; Rattner, David W.; Stangenberg, Lars; Ott, Harald C.; Jones, Daniel B.; Miller, Diane L; Gee, Denise W.

    2012-01-01

    OBJECTIVE To explore how the two most important components of surgical performance - speed and accuracy - are influenced by different forms of stress and what the impact of music on these factors is. SUMMARY BACKGROUND DATA Based on a recently published pilot study on surgical experts, we designed an experiment examining the effects of auditory stress, mental stress, and music on surgical performance and learning, and then correlated the data with psychometric measures of the role of music in a novice surgeon’s life. METHODS 31 surgeons were recruited for a crossover study. Surgeons were randomized to four simple standardized tasks to be performed on the Surgical SIM VR laparoscopic simulator, allowing exact tracking of speed and accuracy. Tasks were performed under a variety of conditions, including silence, dichotic music (auditory stress), defined classical music (auditory relaxation), and mental loading (mental arithmetic tasks). Tasks were performed twice to test for memory consolidation and to accommodate for baseline variability. Performance was correlated to the Brief Musical Experience Questionnaire (MEQ). RESULTS Mental loading influences performance with respect to accuracy, speed, and recall more negatively than does auditory stress. Defined classical music might lead to minimally worse performance initially, but leads to significantly improved memory consolidation. Furthermore, psychologic testing of the volunteers suggests that surgeons with greater musical commitment, measured by the MEQ, perform worse under the mental loading condition. CONCLUSION Mental distraction and auditory stress negatively affect specific components of surgical learning and performance. If used appropriately, classical music may positively affect surgical memory consolidation. It also may be possible to predict surgeons’ performance and learning under stress through psychological tests on the role of music in a surgeon’s life. Further investigation is necessary to determine the cognitive processes behind these correlations. PMID:22584632

  12. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults

    PubMed Central

    Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
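
    As a rough illustration of how an N1 amplitude is quantified from epoched EEG, the sketch below averages trials and takes the mean voltage in a window around 100 ms. The array shapes, window bounds, and synthetic data are assumptions, not the study's measurement protocol.

    ```python
    import numpy as np

    def n1_amplitude(epochs, times, win=(0.08, 0.12)):
        """epochs: (n_trials, n_samples) for one electrode; times in seconds.
        Returns the mean amplitude of the trial-averaged waveform in win."""
        erp = epochs.mean(axis=0)
        mask = (times >= win[0]) & (times <= win[1])
        return erp[mask].mean()

    # Hypothetical data: 100 trials, 1 s at 500 Hz, 200-ms baseline.
    times = np.arange(500) / 500.0 - 0.2
    epochs = np.random.default_rng(1).standard_normal((100, 500))
    print(n1_amplitude(epochs, times))
    ```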

  13. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  14. Cortical Auditory Evoked Potentials with Simple (Tone Burst) and Complex (Speech) Stimuli in Children with Cochlear Implant

    PubMed Central

    Martins, Kelly Vasconcelos Chaves; Gil, Daniela

    2017-01-01

    Introduction  The recording of the P1 component of the cortical auditory evoked potential has been widely used to analyze the behavior of auditory pathways in response to cochlear implant stimulation. Objective  To determine the influence of aural rehabilitation on the latency and amplitude of the P1 cortical auditory evoked potential component elicited by simple auditory stimuli (tone burst) and complex stimuli (speech) in children with cochlear implants. Method  The study included six individuals of both genders, aged 5 to 10 years, who had been cochlear implant users for at least 12 months and who attended auditory rehabilitation with an aural rehabilitation therapy approach. Participants underwent cortical auditory evoked potential testing at the beginning of the study and after 3 months of aural rehabilitation. To elicit the responses, simple stimuli (tone burst) and complex stimuli (speech) were presented in free field at 70 dB HL. The results were statistically analyzed, and the two evaluations were compared. Results  There was no significant difference between the two types of eliciting stimulus in the latency or amplitude of P1. There was a statistically significant difference in P1 latency between the evaluations for both stimuli, with reduced latency in the second evaluation after 3 months of auditory rehabilitation. There was no statistically significant difference in P1 amplitude between the two types of stimuli or between the two evaluations. Conclusion  A decrease in the latency of the P1 component elicited by both simple and complex stimuli was observed within a three-month interval in children with cochlear implants undergoing aural rehabilitation. PMID:29018498

  15. Emergence of neural encoding of auditory objects while listening to competing speakers

    PubMed Central

    Ding, Nai; Simon, Jonathan Z.

    2012-01-01

    A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470
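
    The envelope reconstruction described above is commonly implemented as a linear backward model from sensor data to the attended speech envelope. The sketch below shows one such decoder with lagged regressors and ridge regularization; the lag range, regularization strength, and the circular shift at the edges are simplifying assumptions, not the authors' settings.

    ```python
    import numpy as np

    def fit_envelope_decoder(meg, envelope, lags, lam=1e3):
        """Ridge regression from lagged sensor data to the speech envelope.
        meg: (T, n_channels); envelope: (T,); lags: sample delays to include.
        Note: np.roll wraps at the edges; real analyses trim or zero-pad."""
        X = np.hstack([np.roll(meg, lag, axis=0) for lag in lags])
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
        return w

    def reconstruct(meg, w, lags):
        """Apply the fitted decoder to (possibly new) sensor data."""
        X = np.hstack([np.roll(meg, lag, axis=0) for lag in lags])
        return X @ w
    ```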

  16. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise

    PubMed Central

    White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina

    2015-01-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025
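
    One way to operationalize "stability across trials" is split-half response consistency: the correlation between the averages of two random halves of the trials. The sketch below is a generic version of such a metric, offered as an assumption-laden illustration rather than the measure this paper actually used.

    ```python
    import numpy as np

    def split_half_consistency(trials, seed=0):
        """trials: (n_trials, n_samples). Pearson r between the averages
        of two random halves of the trials."""
        idx = np.random.default_rng(seed).permutation(len(trials))
        half = len(trials) // 2
        a = trials[idx[:half]].mean(axis=0)
        b = trials[idx[half:]].mean(axis=0)
        return np.corrcoef(a, b)[0, 1]
    ```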

  17. Age-Related Deficits in Auditory Confrontation Naming

    PubMed Central

    Hanna-Pladdy, Brenda; Choi, Hyun

    2015-01-01

    The naming of manipulable objects in older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were more impaired and slower to name action sounds than pictures or audiovisual combinations. Moreover, there was a sensory by age group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults unrelated to hearing insensitivity but modest improvement to multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. PMID:20677880

  18. A single-band envelope cue as a supplement to speechreading of segmentals: a comparison of auditory versus tactual presentation.

    PubMed

    Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G

    2001-06-01

    The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
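
    The supplementary signal is described concretely enough to sketch: band-pass the speech to an octave around 500 Hz, extract its amplitude envelope, and use that envelope to modulate a 200-Hz carrier. The filter orders and envelope smoothing below are assumptions; the band edges and carrier frequency follow the description above.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_cue(speech, fs):
        """200-Hz carrier amplitude-modulated by the envelope of an octave
        band of speech centered at 500 Hz (filter orders are assumptions)."""
        lo, hi = 500 / np.sqrt(2), 500 * np.sqrt(2)        # ~354-707 Hz
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, speech)
        env = np.abs(hilbert(band))                         # amplitude envelope
        b_lp, a_lp = butter(2, 50 / (fs / 2))               # smooth envelope
        env = filtfilt(b_lp, a_lp, env)
        t = np.arange(len(speech)) / fs
        return env * np.sin(2 * np.pi * 200.0 * t)          # AM 200-Hz carrier
    ```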

  19. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
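
    The N2ac is defined above as a simple difference waveform (left-target minus right-target averages). A minimal sketch, assuming hypothetical epoch arrays for one anterior electrode, is given below; the measurement window follows the reported peak around 500 ms.

    ```python
    import numpy as np

    def n2ac(epochs_left, epochs_right, times, win=(0.45, 0.55)):
        """epochs_*: (n_trials, n_samples) at an anterior electrode.
        Returns the left-minus-right difference wave and its mean
        amplitude within win (seconds)."""
        diff = epochs_left.mean(axis=0) - epochs_right.mean(axis=0)
        mask = (times >= win[0]) & (times <= win[1])
        return diff, diff[mask].mean()
    ```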

  20. Misperception of exocentric directions in auditory space

    PubMed Central

    Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen

    2008-01-01

    Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205

  1. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

    We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.

  2. Performance of Cerebral Palsied Children under Conditions of Reduced Auditory Input on Selected Intellectual, Cognitive and Perceptual Tasks.

    ERIC Educational Resources Information Center

    Fassler, Joan

    The study investigated the task performance of cerebral palsied children under conditions of reduced auditory input and under normal auditory conditions. A non-cerebral palsied group was studied in a similar manner. Results indicated that cerebral palsied children showed some positive change in performance, under conditions of reduced auditory…

  3. [Analysis of the speech discrimination scores of patients with congenital unilateral microtia and external auditory canal atresia in noise].

    PubMed

    Zhang, Y; Li, D D; Chen, X W

    2017-06-20

    Objective: To compare, in a case-control design, the speech discrimination of patients with unilateral microtia and external auditory canal atresia with that of normal-hearing subjects in quiet and noisy environments, to characterize speech recognition in patients with unilateral external auditory canal atresia, and to provide a scientific basis for early clinical intervention. Method: Twenty patients with unilateral congenital microtia and external auditory canal atresia and 20 age-matched normal-hearing subjects (control group) were tested. All subjects were assessed with Mandarin speech audiometry material to obtain speech discrimination scores (SDS) in quiet and in noise in the sound field. Result: There was no significant difference in speech discrimination scores between the two groups in quiet. There was a statistically significant difference when the speech signal was presented to the affected side and the noise to the normal side (monosyllables, disyllables, and sentences; S/N=0 and S/N=-10) (P<0.05). There was no significant difference when the speech signal was presented to the normal side and the noise to the affected side. When signal and noise were presented to the same side, there was a statistically significant difference for monosyllabic word recognition (S/N=0 and S/N=-5) (P<0.05), but not for disyllabic words or sentences (P>0.05). Conclusion: Under noisy conditions, the speech discrimination scores of patients with unilateral congenital microtia and external auditory canal atresia are lower than those of normal subjects. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
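
    Constructing speech-in-noise conditions at a fixed S/N, as in this study, amounts to scaling the noise relative to the speech power. The sketch below shows the standard scaling rule; it does not reproduce the study's Mandarin test material or presentation setup.

    ```python
    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Scale noise so that 10*log10(P_speech / P_noise) equals snr_db,
        then return the mixture."""
        noise = noise[: len(speech)]
        gain = np.sqrt(np.mean(speech ** 2) /
                       (np.mean(noise ** 2) * 10 ** (snr_db / 10.0)))
        return speech + gain * noise

    # e.g., the study's conditions: mix_at_snr(s, n, 0), mix_at_snr(s, n, -10)
    ```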

  4. Crosscheck Principle in Pediatric Audiology Today: A 40-Year Perspective

    PubMed Central

    2016-01-01

    The crosscheck principle is just as important in pediatric audiology as it was when first described 40 years ago. That is, no auditory test result should be accepted and used in the diagnosis of hearing loss until it is confirmed or crosschecked by one or more independent measures. Exclusive reliance on only one or two tests, even objective auditory measures, may result in an auditory diagnosis that is not clear or perhaps incorrect. On the other hand, close and careful analysis of findings for a test battery consisting of objective procedures and behavioral tests whenever feasible usually leads to prompt and accurate diagnosis of auditory dysfunction. This paper provides a concise review of the crosscheck principle from its introduction to its clinical application today. The review concludes with a description of a modern test battery for pediatric hearing assessment that supplements traditional behavioral tests with a variety of independent objective procedures including aural immittance measures, otoacoustic emissions, and auditory evoked responses. PMID:27626077

  5. Crosscheck Principle in Pediatric Audiology Today: A 40-Year Perspective.

    PubMed

    Hall, James W

    2016-09-01

    The crosscheck principle is just as important in pediatric audiology as it was when first described 40 years ago. That is, no auditory test result should be accepted and used in the diagnosis of hearing loss until it is confirmed or crosschecked by one or more independent measures. Exclusive reliance on only one or two tests, even objective auditory measures, may result in an auditory diagnosis that is not clear or perhaps incorrect. On the other hand, close and careful analysis of findings for a test battery consisting of objective procedures and behavioral tests whenever feasible usually leads to prompt and accurate diagnosis of auditory dysfunction. This paper provides a concise review of the crosscheck principle from its introduction to its clinical application today. The review concludes with a description of a modern test battery for pediatric hearing assessment that supplements traditional behavioral tests with a variety of independent objective procedures including aural immittance measures, otoacoustic emissions, and auditory evoked responses.

  6. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    PubMed Central

    Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629
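
    Spectral rotation, the control manipulation used here, flips the speech spectrum about a fixed frequency so the signal stays acoustically matched but unintelligible. The sketch below implements one common recipe (low-pass, then ring-modulate, then low-pass again); the rotation frequency and filter order are assumptions, not the study's parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def spectrally_rotate(x, fs, f_max=4000.0):
        """Map each component at f Hz (f < f_max) to f_max - f: low-pass at
        f_max, ring-modulate with a sinusoid at f_max, low-pass again."""
        b, a = butter(6, f_max / (fs / 2))
        x_lp = filtfilt(b, a, x)
        t = np.arange(len(x)) / fs
        return filtfilt(b, a, x_lp * np.cos(2 * np.pi * f_max * t))
    ```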

  7. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    ERIC Educational Resources Information Center

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  8. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    PubMed

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  9. Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation

    PubMed Central

    Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.

    2010-01-01

    Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540
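
    The sensitivity, specificity, and positive predictive values reported here follow directly from a 2x2 confusion matrix comparing evoked-response sites against ESM-critical sites. As a reminder of how such values are derived, here is a minimal Python sketch; the counts are hypothetical, not taken from the study.

        # Deriving sensitivity, specificity, and positive predictive
        # value (PPV) from a 2x2 confusion matrix. Counts are hypothetical.
        def diagnostic_metrics(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)   # hit rate against the ESM reference
            specificity = tn / (tn + fp)   # correct-rejection rate
            ppv = tp / (tp + fp)           # precision of a "positive" electrode
            return sensitivity, specificity, ppv

        # Example: electrodes flagged by an evoked response vs. ESM-critical sites.
        sens, spec, ppv = diagnostic_metrics(tp=6, fp=13, fn=2, tn=60)
        print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")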

  10. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Accordingly, memory performance is assumed to be better for visual than for, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation are largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
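
    For readers unfamiliar with tau: it is the ratio of an object's instantaneous optical size (or sound intensity) to that quantity's rate of change, which approximates time to contact under constant closing speed. A minimal Python sketch, with hypothetical approach parameters:

        import numpy as np

        # Tau-based time-to-contact (TTC) estimate: tau = theta / (d theta/dt),
        # where theta is instantaneous optical size (or sound intensity).
        # The approach scenario below is hypothetical.
        def tau_ttc(theta, t):
            dtheta_dt = np.gradient(theta, t)   # numerical rate of change
            return theta / dtheta_dt            # TTC estimate at each sample

        t = np.linspace(0.0, 2.0, 200)          # seconds
        distance = 30.0 - 10.0 * t              # vehicle approaching at 10 m/s
        theta = 2.0 / distance                  # optical size grows as 1/distance
        print(tau_ttc(theta, t)[0])             # ~3 s: 30 m at 10 m/s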

  12. Nonlinear Processing of Auditory Brainstem Response

    DTIC Science & Technology

    2001-10-25

    Kraków, Poland. Auditory brainstem response potentials (ABR) are signals calculated from the EEG signals registered as responses to an...acoustic activation of the auditory system. The ABR signals provide an objective diagnostic method, widely applied in examinations of hearing organs.

  13. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  14. Cerebral responses to local and global auditory novelty under general anesthesia

    PubMed Central

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2017-01-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  15. EEG signatures accompanying auditory figure-ground segregation

    PubMed Central

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István

    2017-01-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185
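
    Stimuli of this kind (often called stochastic figure-ground stimuli) are easy to sketch: successive chords of randomly drawn pure tones, with a subset of tones repeating across chords to form the figure. A minimal Python generator; all parameters are illustrative, not those of the study.

        import numpy as np

        # Stochastic figure-ground stimulus: chords of random pure tones with
        # a repeating "figure" subset. Parameters are illustrative only.
        fs = 16000
        chord_dur, n_chords = 0.05, 20               # 50-ms chords
        freqs = np.geomspace(200.0, 4000.0, 60)      # candidate frequencies
        rng = np.random.default_rng(0)

        coherence = 4                                # tones in the repeating figure
        figure = rng.choice(freqs, coherence, replace=False)
        t = np.arange(int(fs * chord_dur)) / fs

        chords = []
        for c in range(n_chords):
            if 5 <= c < 12:                          # figure present mid-stimulus
                background = rng.choice(freqs, 8, replace=False)
                components = np.concatenate([figure, background])
            else:
                components = rng.choice(freqs, 12, replace=False)
            chord = sum(np.sin(2 * np.pi * f * t) for f in components)
            chords.append(chord / len(components))
        stimulus = np.concatenate(chords)            # figure emerges, then ends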

  16. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
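
    The race model inequality mentioned here (Miller's bound) tests whether bimodal response times are faster than any race between two independent unisensory processes could produce; violations imply neural integration rather than probability summation. A minimal Python sketch on hypothetical reaction-time samples:

        import numpy as np

        # Miller's race model inequality:
        #   P(RT <= t | AV)  <=  P(RT <= t | A) + P(RT <= t | V)
        # Violations at any t suggest integration. RT samples are hypothetical.
        def ecdf(rts, grid):
            return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

        rng = np.random.default_rng(1)
        rt_a = rng.normal(320, 40, 200)     # auditory-only RTs (ms)
        rt_v = rng.normal(360, 40, 200)     # visual-only RTs (ms)
        rt_av = rng.normal(280, 35, 200)    # audiovisual RTs (ms)

        grid = np.linspace(150, 500, 100)
        bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
        violation = ecdf(rt_av, grid) - bound   # > 0 where the race model fails
        print("max violation:", violation.max())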

  17. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  18. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  19. The auditory nerve overlapped waveform (ANOW): A new objective measure of low-frequency hearing

    NASA Astrophysics Data System (ADS)

    Lichtenhan, Jeffery T.; Salt, Alec N.; Guinan, John J.

    2015-12-01

    One of the most pressing problems today in the mechanics of hearing is to understand the mechanical motions in the apical half of the cochlea. Almost all available measurements from the cochlear apex of basilar membrane or other organ-of-Corti transverse motion have been made from ears where the health, or sensitivity, in the apical half of the cochlea was not known. A key step in understanding the mechanics of the cochlear base was to trust mechanical measurements only when objective measures from auditory-nerve compound action potentials (CAPs) showed good preparation sensitivity. However, such traditional objective measures are not adequate monitors of cochlear health in the very low-frequency regions of the apex that are accessible for mechanical measurements. To address this problem, we developed the Auditory Nerve Overlapped Waveform (ANOW) that originates from auditory nerve output in the apex. When responses from the round window to alternating low-frequency tones are averaged, the cochlear microphonic is canceled and phase-locked neural firing interleaves in time (i.e., overlaps). The result is a waveform that oscillates at twice the probe frequency. We have demonstrated that this Auditory Nerve Overlapped Waveform - called ANOW - originates from auditory nerve fibers in the cochlear apex [8], relates well to single-auditory-nerve-fiber thresholds, and can provide an objective estimate of low-frequency sensitivity [7]. Our new experiments demonstrate that ANOW is a highly sensitive indicator of apical cochlear function. During four different manipulations to the scala media along the cochlear spiral, ANOW amplitude changed when either no, or only small, changes occurred in CAP thresholds. Overall, our results demonstrate that ANOW can be used to monitor cochlear sensitivity of low-frequency regions during experiments that make apical basilar membrane motion measurements.
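
    The core signal-processing step behind ANOW is averaging responses to alternating-polarity tones: the cochlear microphonic inverts with stimulus polarity and cancels, while phase-locked neural firing occurs on opposite half-cycles of the two polarities and therefore interleaves, leaving energy at twice the probe frequency. A toy Python sketch with entirely synthetic signals:

        import numpy as np

        # Toy ANOW averaging: the polarity-following cochlear microphonic (CM)
        # cancels across alternating polarities, while half-wave-rectified
        # neural firing interleaves and sums at 2x the probe frequency.
        # All signals here are synthetic.
        fs, f0 = 10000, 100                      # 100-Hz probe (hypothetical)
        t = np.arange(0, 0.1, 1 / fs)
        carrier = np.sin(2 * np.pi * f0 * t)

        cm = carrier                             # CM follows stimulus polarity
        neural_pos = np.clip(carrier, 0, None) ** 2    # firing on one half-cycle
        neural_neg = np.clip(-carrier, 0, None) ** 2   # ...and on the other

        resp_pos = cm + neural_pos               # condensation response
        resp_neg = -cm + neural_neg              # rarefaction response
        anow = (resp_pos + resp_neg) / 2         # CM cancels; 2*f0 remains

        spectrum = np.abs(np.fft.rfft(anow - anow.mean()))
        freqs = np.fft.rfftfreq(len(anow), 1 / fs)
        print(freqs[spectrum.argmax()])          # peak near 2*f0 = 200 Hz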

  20. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human-performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1, detection and categorisation of auditory and kinematic motion cues; Experiment 2, performance evaluation in a target-tracking task; Experiment 3, transferrable learning of auditory motion cues. We show that the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
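
    The phrase "optimally integrated" refers to the standard maximum-likelihood model of cue combination, in which the bimodal estimate weights each cue by its inverse variance. A minimal Python sketch with hypothetical estimates and variances:

        # Maximum-likelihood ("optimal") cue integration: each cue is weighted
        # by its reliability (inverse variance). Values are hypothetical.
        def ml_integration(est_a, var_a, est_v, var_v):
            w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
            w_v = 1 - w_a
            est = w_a * est_a + w_v * est_v
            var = 1 / (1 / var_a + 1 / var_v)    # combined variance shrinks
            return est, var

        # Example: auditory and visual estimates of the same motion parameter.
        print(ml_integration(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0))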

  1. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.

  2. Hierarchical neurocomputations underlying concurrent sound segregation: connecting periphery to percept.

    PubMed

    Bidelman, Gavin M; Alain, Claude

    2015-02-01

    Natural soundscapes often contain multiple sound sources at any given time. Numerous studies have reported that in human observers, the perception and identification of concurrent sounds is paralleled by specific changes in cortical event-related potentials (ERPs). Although these studies provide a window into the cerebral mechanisms governing sound segregation, little is known about the subcortical neural architecture and hierarchy of neurocomputations that lead to this robust perceptual process. Using computational modeling, scalp-recorded brainstem/cortical ERPs, and human psychophysics, we demonstrate that a primary cue for sound segregation, i.e., harmonicity, is encoded at the auditory nerve level within tens of milliseconds after the onset of sound and is maintained, largely untransformed, in phase-locked activity of the rostral brainstem. As then indexed by auditory cortical responses, (in)harmonicity is coded in the signature and magnitude of the cortical object-related negativity (ORN) response (150-200 ms). The salience of the resulting percept is then captured in a discrete, categorical-like coding scheme by a late negativity response (N5; ~500 ms latency), just prior to the elicitation of a behavioral judgment. Subcortical activity correlated with cortical evoked responses such that weaker phase-locked brainstem responses (lower neural harmonicity) generated larger ORN amplitude, reflecting the cortical registration of multiple sound objects. Studying multiple brain indices simultaneously helps illuminate the mechanisms and time-course of neural processing underlying concurrent sound segregation and may lead to further development and refinement of physiologically driven models of auditory scene analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
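
    The harmonicity cue at issue can be quantified from a response spectrum as the share of energy falling at harmonics of a candidate F0; this is one simple way to operationalize "neural harmonicity." A minimal Python sketch on a synthetic signal (the metric and parameters are illustrative, not the authors' exact analysis):

        import numpy as np

        # A simple harmonicity index: the fraction of spectral energy at
        # harmonics of a candidate F0. Signal and parameters are illustrative.
        def harmonicity(x, fs, f0, n_harm=8, tol=2.0):
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            at_harm = sum(spec[np.abs(freqs - k * f0) < tol].sum()
                          for k in range(1, n_harm + 1))
            return at_harm / spec[freqs > 0].sum()

        fs = 8000
        t = np.arange(0, 0.5, 1 / fs)
        tone = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(1, 5))
        print(harmonicity(tone, fs, f0=100))     # near 1 for a harmonic complex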

  3. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  4. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    PubMed

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, by contrast, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration of sensory, perceptual, attentional, motor, emotional, and executive processes.

  5. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    PubMed

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and for understanding the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. EEG signatures accompanying auditory figure-ground segregation.

    PubMed

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P; Szerafin, Ágnes; Shinn-Cunningham, Barbara G; Winkler, István

    2016-11-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. Copyright © 2016. Published by Elsevier Inc.

  7. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face many hardships with shopping, reading, finding objects, and so on. Therefore, we developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from a camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through an earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
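
    SoundView's published coding algorithms are not reproduced here, but the final rendering step any such system needs, placing a sound at the detected object's azimuth, can be sketched with generic constant-power stereo panning. A minimal Python example (the technique is generic, not SoundView's):

        import numpy as np

        # Generic constant-power stereo panning: render a detected object's
        # azimuth as an interaural level difference. Not SoundView's algorithm.
        def pan_stereo(mono, azimuth_deg):
            # azimuth in [-90, 90]; -90 = hard left, +90 = hard right
            theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2)
            return np.column_stack([np.cos(theta) * mono,   # left channel
                                    np.sin(theta) * mono])  # right channel

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        earcon = np.sin(2 * np.pi * 880 * t)            # tone marking an object
        stereo = pan_stereo(earcon, azimuth_deg=30.0)   # object right of center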

  8. Enhanced auditory temporal gap detection in listeners with musical training.

    PubMed

    Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn

    2014-08-01

    Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies of Western-classical musicians, that auditory temporal coding is enhanced in musicians.

  9. P50 Suppression in Children with Selective Mutism: A Preliminary Report

    ERIC Educational Resources Information Center

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…

  10. Effects of emotionally charged auditory stimulation on gait performance in the elderly: a preliminary study.

    PubMed

    Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe

    2015-04-01

    Objective: To evaluate the effect of a novel divided attention task, walking under auditory constraints, on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Design: Validation cohort. Setting: General community. Participants: Ambulatory older adults without dementia (N=104). Interventions: Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6 cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with a 1-between factor (cognitively intact and minimal cognitive impairment groups) and a 1-within factor (type of auditory stimuli). In both happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68 cm/s in gait velocity (F(1.86, 191.71)=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F(1.75, 180.42)=10.12; P<.001) as compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5-10; n=8) showed an average reduction of 5.45 cm/s in gait velocity (F(1.87, 190.83)=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F(1.79, 183.10)=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences. Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those who were cognitively intact. A divided attention task using emotionally charged auditory stimuli might elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task with gait on other dual-task paradigms and to separately examine the effects of physiological aging versus cognitive impairment on gait during walking under auditory constraints. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  11. Brainstem auditory evoked responses in an equine patient population: part I--adult horses.

    PubMed

    Aleman, M; Holliday, T A; Nieto, J E; Williams, D C

    2014-01-01

    Background: Brainstem auditory evoked response (BAER) testing has been an underused diagnostic modality in horses, as evidenced by few reports on the subject. Objectives: To describe BAER findings, common clinical signs, and causes of hearing loss in adult horses. Animals: Study group, 76 horses; control group, 8 horses. Methods: Retrospective study. BAER records from the Clinical Neurophysiology Laboratory were reviewed from the years 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a post hoc Bonferroni test were performed. Results: Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral in 74% (n = 42/57) and unilateral in 26% (n = 15/57) of the horses. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease, whereas THO and otitis were the most common causes of unilateral deficits. Conclusions and Clinical Importance: Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and in horses with certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.

  12. Auditory Outcomes with Hearing Rehabilitation in Children with Unilateral Hearing Loss: A Systematic Review.

    PubMed

    Appachi, Swathi; Specht, Jessica L; Raol, Nikhila; Lieu, Judith E C; Cohen, Michael S; Dedhia, Kavita; Anne, Samantha

    2017-10-01

    Objective: Options for management of unilateral hearing loss (UHL) in children include conventional hearing aids, bone-conduction hearing devices, contralateral routing of signal (CROS) aids, and frequency-modulating (FM) systems. The objective of this study was to systematically review the current literature to characterize auditory outcomes of hearing rehabilitation options in UHL. Data Sources: PubMed, EMBASE, Medline, CINAHL, and Cochrane Library were searched from inception to January 2016. Manual searches of bibliographies were also performed. Review Methods: Studies analyzing auditory outcomes of hearing amplification in children with UHL were included. Outcome measures included functional and objective auditory results. Two independent reviewers evaluated each abstract and article. Results: Of the 249 articles identified, 12 met inclusion criteria. Seven articles solely focused on outcomes with bone-conduction hearing devices. Outcomes favored improved pure-tone averages, speech recognition thresholds, and sound localization in implanted patients. Five studies focused on FM systems, conventional hearing aids, or CROS hearing aids. Limited data are available but suggest a trend toward improvement in speech perception with hearing aids. FM systems were shown to have the most benefit for speech recognition in noise. Studies evaluating CROS hearing aids demonstrated variable outcomes. Conclusions: Data evaluating functional and objective auditory measures following hearing amplification in children with UHL are limited. Most studies do suggest improvement in speech perception, speech recognition in noise, and sound localization with a hearing rehabilitation device.

  13. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    ERIC Educational Resources Information Center

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  14. Memory for sound, with an ear toward hearing in complex auditory scenes.

    PubMed

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  15. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    DTIC Science & Technology

    2016-11-28

    of low spontaneous rate auditory nerve fibers (ANFs) and reduction of auditory brainstem response wave-I amplitudes. The goal of this research is...auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram

  16. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-08-01

    Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
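
    The superadditivity criterion used here is simply whether the bimodal response exceeds the sum of the unisensory responses. A minimal Python sketch with hypothetical response estimates:

        # Super/subadditivity of a bimodal response: compare AV with A + V.
        # The beta values are hypothetical contrast estimates.
        def interaction(beta_av, beta_a, beta_v):
            diff = beta_av - (beta_a + beta_v)
            if diff > 0:
                return "superadditive (AV > A + V)"
            if diff < 0:
                return "subadditive (AV < A + V)"
            return "additive (AV = A + V)"

        print(interaction(beta_av=1.4, beta_a=0.5, beta_v=0.6))  # superadditive
        print(interaction(beta_av=0.8, beta_a=0.5, beta_v=0.6))  # subadditive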

  17. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621

  18. Objective Metric Based Assessments for Efficient Evaluation of Auditory Situation Awareness Characteristics of Tactical Communications and Protective Systems (TCAPS) and Augmented Hearing Protective Devices (HPDs)

    DTIC Science & Technology

    2015-11-30

    Assessments for Efficient Evaluation of Auditory Situation Awareness Characteristics of Tactical Communications and Protective Systems (TCAPS) and Augmented Hearing Protective Devices (HPDs). W81XWH-13-C-0193. John G. Casali, Ph.D, CPE & Kichol Lee, Ph.D, Auditory Systems Lab, Industrial and Systems… Approved for public release: distribution unlimited. The Virginia Tech Auditory Systems Laboratory (ASL…

  19. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  20. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.

    PubMed

    Tian, Biao; Rauschecker, Josef P

    2004-11-01

    Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
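
    Linear FM sweeps of the kind used here are generated by integrating a linearly changing instantaneous frequency to obtain phase. A minimal Python sketch; the rates and frequencies are hypothetical, not the study's stimulus set:

        import numpy as np

        # Linear FM sweep: phase is the time integral of a linearly changing
        # instantaneous frequency. Positive rates sweep "up", negative "down".
        # Parameters are hypothetical.
        def fm_sweep(f_start, rate_hz_per_ms, dur_s, fs=44100):
            t = np.arange(int(dur_s * fs)) / fs
            f_inst = f_start + rate_hz_per_ms * 1000.0 * t   # Hz at time t
            phase = 2 * np.pi * np.cumsum(f_inst) / fs       # integrate frequency
            return np.sin(phase)

        up_slow = fm_sweep(f_start=500, rate_hz_per_ms=10, dur_s=0.1)
        down_fast = fm_sweep(f_start=8000, rate_hz_per_ms=-70, dur_s=0.1)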

  1. The Perception of Concurrent Sound Objects in Harmonic Complexes Impairs Gap Detection

    ERIC Educational Resources Information Center

    Leung, Ada W. S.; Jolicoeur, Pierre; Vachon, Francois; Alain, Claude

    2011-01-01

    Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a…

  2. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
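
    Double-vowel stimuli of this kind are built by summing two harmonic complexes with different F0s, each shaped by its own formant envelope. A minimal Python sketch; the formant values are textbook approximations and every other parameter is illustrative:

        import numpy as np

        # Double-vowel stimulus: two harmonic complexes with different F0s,
        # each weighted by a crude Gaussian formant envelope. Illustrative only.
        def vowel(f0, formants, dur=0.3, fs=16000, bw=80.0):
            t = np.arange(int(dur * fs)) / fs
            out = np.zeros_like(t)
            for k in range(1, int((fs / 2) // f0)):
                f = k * f0
                gain = sum(np.exp(-((f - fm) ** 2) / (2 * bw ** 2))
                           for fm in formants)          # spectral envelope weight
                out += gain * np.sin(2 * np.pi * f * t)
            return out / np.abs(out).max()

        a = vowel(f0=100, formants=(730, 1090))   # /a/-like, F0 = 100 Hz
        i = vowel(f0=126, formants=(270, 2290))   # /i/-like, F0 four semitones up
        double_vowel = (a + i) / 2                # heard as two pitched objects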

  3. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  4. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore only one or a few dimensions due to the difficulty of determining optimal stimuli in a high-dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates the use of multidimensional stimuli in studies of auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
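
    As a rough illustration of the GAM analysis described above, the sketch below fits spike counts as a smooth function of two stimulus dimensions plus a tensor-product interaction term. The `pygam` package and all simulated values are assumptions for illustration; the paper does not specify an implementation.

```python
# Hedged sketch of the analysis idea: fit a generalized additive model (GAM)
# to spike counts as a function of two stimulus dimensions, with and without
# a tensor-product interaction term. Uses the third-party `pygam` package
# (an assumption, not the authors' toolchain); data are simulated.
import numpy as np
from pygam import PoissonGAM, s, te

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0, 1, size=(n, 2))           # two stimulus dimensions
rate = np.exp(1.0 + X[:, 0] - 0.5 * X[:, 1] + 2.0 * X[:, 0] * X[:, 1])
y = rng.poisson(rate)                        # simulated spike counts

additive = PoissonGAM(s(0) + s(1)).fit(X, y)             # no interaction
interact = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, y)  # with interaction

# If neurons integrate across dimensions, the interaction model should fit
# better, mirroring the paper's finding of cross-dimension interactions.
print(additive.statistics_['AIC'], interact.statistics_['AIC'])
```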

  5. The Effect of Noise on the Relationship between Auditory Working Memory and Comprehension in School-Age Children

    ERIC Educational Resources Information Center

    Sullivan, Jessica R.; Osman, Homira; Schafer, Erin C.

    2015-01-01

    Purpose: The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Method: Children with normal hearing between the ages…

  6. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be combined to create a stable percept of a stimulus, and access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has an analogous auditory percept; consider viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Neuromorphic devices are likely to become faster and smaller, making systems such as this one increasingly feasible.
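
    The core mapping such a system needs, from visual event position to a localizable sound, can be sketched in a few lines. The following is a hypothetical illustration, not the authors' code: the normalized image x-coordinate of a detected event drives simple interaural time and level differences.

```python
# Hypothetical sketch: map the image x-coordinate of a detected
# brightness-change event to crude binaural cues so the event can be
# "heard" at roughly the corresponding direction. Parameter values are
# illustrative, not taken from the paper.
import numpy as np

FS = 44100

def event_to_stereo_tone(x_norm, dur=0.1, freq=1000.0, fs=FS):
    """x_norm in [-1, 1]: -1 = far left, +1 = far right.
    Returns an (n, 2) stereo buffer with level and time differences."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    itd = 0.0006 * abs(x_norm)               # up to ~0.6 ms interaural delay
    shift = int(itd * fs)                    # np.roll wraps; fine for a tone
    left = np.roll(tone, shift if x_norm > 0 else 0) * (1 - 0.5 * max(x_norm, 0))
    right = np.roll(tone, shift if x_norm < 0 else 0) * (1 + 0.5 * min(x_norm, 0))
    return np.stack([left, right], axis=1)

buf = event_to_stereo_tone(x_norm=0.8)       # event near the right edge
```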

  7. Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.

    PubMed

    Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko

    2017-03-01

    Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.

  8. Developmental changes in distinguishing concurrent auditory objects.

    PubMed

    Alain, Claude; Theunissen, Eef L; Chevalier, Hélène; Batty, Magali; Taylor, Margot J

    2003-04-01

    Children have considerable difficulties in identifying speech in noise. In the present study, we examined age-related differences in central auditory functions that are crucial for parsing co-occurring auditory events using behavioral and event-related brain potential measures. Seventeen pre-adolescent children and 17 adults were presented with complex sounds containing multiple harmonics, one of which could be 'mistuned' so that it was no longer an integer multiple of the fundamental. Both children and adults were more likely to report hearing the mistuned harmonic as a separate sound with an increase in mistuning. However, children were less sensitive in detecting mistuning across all levels as revealed by lower d' scores than adults. The perception of two concurrent auditory events was accompanied by a negative wave that peaked at about 160 ms after sound onset. In both age groups, the negative wave, referred to as the 'object-related negativity' (ORN), increased in amplitude with mistuning. The ORN was larger in children than in adults despite a lower d' score. Together, the behavioral and electrophysiological results suggest that concurrent sound segregation is probably adult-like in pre-adolescent children, but that children are inefficient in processing the information following the detection of mistuning. These findings also suggest that processes involved in distinguishing concurrent auditory objects continue to mature during adolescence.
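
    The d' scores referred to above follow from standard signal detection theory: d' is the difference between the z-transformed hit and false-alarm rates. A minimal computation with made-up rates:

```python
# Sensitivity index d' = z(hit rate) - z(false-alarm rate). The rates below
# are invented solely to illustrate the lower d' reported for children.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.70, 0.30))  # lower sensitivity, e.g. a child:  ~1.05
print(d_prime(0.90, 0.10))  # higher sensitivity, e.g. an adult: ~2.56
```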

  9. Common and differential electrophysiological mechanisms underlying semantic object memory retrieval probed by features presented in different stimulus types.

    PubMed

    Chiang, Hsueh-Sheng; Eroh, Justin; Spence, Jeffrey S; Motes, Michael A; Maguire, Mandy J; Krawczyk, Daniel C; Brier, Matthew R; Hart, John; Kraut, Michael A

    2016-08-01

    How the brain combines the neural representations of features that comprise an object in order to activate a coherent object memory is poorly understood, especially when the features are presented in different modalities (visual vs. auditory) and domains (verbal vs. nonverbal). We examined this question using three versions of a modified Semantic Object Retrieval Test, where object memory was probed by a feature presented as a written word, a spoken word, or a picture, followed by a second feature always presented as a visual word. Participants indicated whether each feature pair elicited retrieval of the memory of a particular object. Sixteen subjects completed one of the three versions (N=48 in total) while their EEG was recorded. We analyzed EEG data in four separate frequency bands (delta: 1-4 Hz; theta: 4-7 Hz; alpha: 8-12 Hz; beta: 13-19 Hz) using a multivariate data-driven approach. We found that alpha power time-locked to response was modulated by both cross-modality (visual vs. auditory) and cross-domain (verbal vs. nonverbal) probing of semantic object memory. In addition, retrieval trials showed greater changes in all frequency bands compared to non-retrieval trials across all stimulus types in both response-locked and stimulus-locked analyses, suggesting dissociable neural subcomponents involved in binding object features to retrieve a memory. We conclude that these findings support both modality/domain-dependent and modality/domain-independent mechanisms during semantic object memory retrieval. Copyright © 2016 Elsevier B.V. All rights reserved.
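
    One common way to obtain band power measures like those analyzed above is to integrate a Welch power spectral density estimate over each band. The sketch below uses the abstract's band edges; the sampling rate and data are placeholders, not the study's.

```python
# Generic sketch: quantify power in the four frequency bands named above
# (delta, theta, alpha, beta) from one EEG channel via Welch's PSD estimate.
# The sampling rate and the signal are simulated placeholders.
import numpy as np
from scipy.signal import welch

FS = 250                                   # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 60)         # 60 s of fake data

freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)

bands = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (13, 19)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    power = np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
    print(f"{name}: {power:.3f}")
```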

  10. Crossmodal association of auditory and visual material properties in infants.

    PubMed

    Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K

    2018-06-18

    The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, using near-infrared spectroscopy (NIRS), we demonstrated for the first time a mapping between auditory and visual material properties ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants. Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, consistent with infants forming the visual property of "Metal" only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.

  11. Modality-specificity of Selective Attention Networks

    PubMed Central

    Stewart, Hannah J.; Amitay, Sygal

    2015-01-01

    Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709

  12. The RetroX auditory implant for high-frequency hearing loss.

    PubMed

    Garin, P; Genard, F; Galle, C; Jamart, J

    2004-07-01

    The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensorineural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients are satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observe a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improves by 9 dB. Speech audiometry in noise shows that intelligibility improves by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicates that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.

  13. Hierarchical Processing of Auditory Objects in Humans

    PubMed Central

    Kumar, Sukhbinder; Stephan, Klaas E; Warren, Jason D; Friston, Karl J; Griffiths, Timothy D

    2007-01-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas. PMID:17542641
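
    The Bayesian model selection step described above can be illustrated with a small computation: given a log model evidence for each of the 16 candidate models and a uniform model prior, posterior model probabilities follow from a softmax over the log evidences (a fixed-effects simplification; the study's actual procedure may differ). The values below are random placeholders.

```python
# Illustration of Bayesian model selection over 16 candidate network models:
# with a uniform prior, posterior model probabilities are proportional to
# exp(log evidence). Log evidences here are random placeholders, not the
# study's values.
import numpy as np

rng = np.random.default_rng(2)
log_evidence = rng.normal(0, 3, size=16)      # one value per candidate model

log_evidence -= log_evidence.max()            # stabilize the exponentials
posterior = np.exp(log_evidence) / np.exp(log_evidence).sum()

best = int(np.argmax(posterior))
print(f"model {best} wins with posterior probability {posterior[best]:.2f}")
```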

  14. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging

    PubMed Central

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-01-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629

  15. Gap prepulse inhibition and auditory brainstem-evoked potentials as objective measures for tinnitus in guinea pigs

    PubMed Central

    Dehmel, Susanne; Eisinger, Daniel; Shore, Susan E.

    2012-01-01

    Tinnitus or ringing of the ears is a subjective phantom sensation necessitating behavioral models that objectively demonstrate the existence and quality of the tinnitus sensation. The gap detection test uses the acoustic startle response elicited by loud noise pulses and its gating or suppression by preceding sub-startling prepulses. Gaps in noise bands serve as prepulses, assuming that ongoing tinnitus masks the gap and results in impaired gap detection. This test has shown its reliability in rats, mice, and gerbils. No data exists for the guinea pig so far, although gap detection is similar across mammals and the acoustic startle response is a well-established tool in guinea pig studies of psychiatric disorders and in pharmacological studies. Here we investigated the startle behavior and prepulse inhibition (PPI) of the guinea pig and showed that guinea pigs have a reliable startle response that can be suppressed by 15 ms gaps embedded in narrow noise bands preceding the startle noise pulse. After recovery of auditory brainstem response (ABR) thresholds from a unilateral noise over-exposure centered at 7 kHz, guinea pigs showed diminished gap-induced reduction of the startle response in frequency bands between 8 and 18 kHz. This suggests the development of tinnitus in frequency regions that showed a temporary threshold shift (TTS) after noise over-exposure. Changes in discharge rate and synchrony, two neuronal correlates of tinnitus, should be reflected in altered ABR waveforms, which would be useful to objectively detect tinnitus and its localization to auditory brainstem structures. Therefore, we analyzed latencies and amplitudes of the first five ABR waves at suprathreshold sound intensities and correlated ABR abnormalities with the results of the behavioral tinnitus testing. Early ABR wave amplitudes up to N3 were increased for animals with tinnitus possibly stemming from hyperactivity and hypersynchrony underlying the tinnitus percept. Animals that did not develop tinnitus after noise exposure showed the opposite effect, a decrease in wave amplitudes for the later waves P4–P5. Changes in latencies were only observed in tinnitus animals, which showed increased latencies. Thus, tinnitus-induced changes in the discharge activity of the auditory nerve and central auditory nuclei are represented in the ABR. PMID:22666193
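
    The gap-detection logic rests on simple arithmetic: prepulse inhibition (PPI) is the fractional reduction of the startle amplitude when a gap precedes the startle pulse, and tinnitus is inferred when that reduction shrinks. An illustrative computation (values invented):

```python
# Gap prepulse inhibition (PPI): percent reduction of the startle response
# when a silent gap precedes the startle pulse. If ongoing tinnitus "fills
# in" the gap, the gap suppresses the startle less. Numbers are illustrative.
def gap_ppi(startle_alone, startle_with_gap):
    """Percent inhibition of the startle response by a preceding gap."""
    return 100.0 * (1.0 - startle_with_gap / startle_alone)

print(gap_ppi(1.00, 0.55))  # gap clearly heard: ~45% PPI
print(gap_ppi(1.00, 0.90))  # gap masked by tinnitus: only ~10% PPI
```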

  16. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  17. Sound effects: Multimodal input helps infants find displaced objects.

    PubMed

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution: What is already known on this subject? Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  18. Study of tonotopic brain changes with functional MRI and FDG-PET in a patient with unilateral objective cochlear tinnitus.

    PubMed

    Guinchard, A-C; Ghazaleh, Naghmeh; Saenz, M; Fornari, E; Prior, J O; Maeder, P; Adib, S; Maire, R

    2016-11-01

    We studied possible brain changes with functional MRI (fMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) in a patient with a rare, high-intensity "objective tinnitus" (high-level SOAEs) in the left ear of 10 years duration, with no associated hearing loss. This is the first case of objective cochlear tinnitus to be investigated with functional neuroimaging. The objective cochlear tinnitus was measured by Spontaneous Otoacoustic Emissions (SOAE) equipment (frequency 9689 Hz, intensity 57 dB SPL) and is clearly audible to anyone standing near the patient. Functional modifications in primary auditory areas and other brain regions were evaluated using 3T and 7T fMRI and FDG-PET. In the fMRI evaluations, a saturation of the auditory cortex at the tinnitus frequency was observed, but the global cortical tonotopic organization remained intact when compared to the results of fMRI of healthy subjects. The FDG-PET showed no evidence of an increase or decrease of activity in the auditory cortices or in the limbic system as compared to normal subjects. In this patient with high-intensity objective cochlear tinnitus, fMRI and FDG-PET showed no significant brain reorganization in auditory areas and/or in the limbic system, as reported in the literature in patients with chronic subjective tinnitus. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The perception of coherent and non-coherent auditory objects: a signature in gamma frequency band.

    PubMed

    Knief, A; Schulte, M; Bertran, O; Pantev, C

    2000-07-01

    Whether gamma band activity in magnetoencephalographic and electroencephalographic recordings reflects the performance of a gestalt recognition process remains an open question. We investigated the functional relevance of gamma band activity for the perception of auditory objects. An auditory experiment was performed as an analog to the Kanizsa experiment in the visual modality, comprising four different coherent and non-coherent stimuli. For the first time, functional differences in evoked gamma band activity during the perception of these stimuli were demonstrated by several methods (localization of sources, wavelet analysis, and independent component analysis, ICA). Responses to coherent stimuli were found to have more features in common than responses to non-coherent stimuli (e.g., more closely located sources and a smaller number of ICA components). The results point to the existence of a pitch processor in the auditory pathway.
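
    Gamma band activity of the kind analyzed above is often quantified by band-pass filtering and taking the envelope of the analytic signal. The sketch below shows this generic approach (not necessarily the authors' exact pipeline) on simulated data.

```python
# Generic sketch: extract a gamma-band amplitude envelope by band-passing
# the signal around the gamma range and taking the magnitude of its analytic
# signal. Sampling rate, band edges, and data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 600                                     # assumed MEG sampling rate (Hz)
rng = np.random.default_rng(3)
meg = rng.standard_normal(FS * 10)           # 10 s of fake data

b, a = butter(4, [30, 80], btype="bandpass", fs=FS)  # gamma band, ~30-80 Hz
gamma = filtfilt(b, a, meg)                  # zero-phase band-pass filter
envelope = np.abs(hilbert(gamma))            # instantaneous gamma amplitude
print(envelope.mean())
```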

  1. The effect of background music on the taste of wine.

    PubMed

    North, Adrian C

    2012-08-01

    Research concerning cross-modal influences on perception has neglected auditory influences on perceptions of non-auditory objects, although a small number of studies indicate that auditory stimuli can influence perceptions of the freshness of foodstuffs. Consistent with this, the results reported here indicate that independent groups' ratings of the taste of the wine reflected the emotional connotations of the background music played while they drank it. These results indicate that the symbolic function of auditory stimuli (in this case music) may influence perception in other modalities (in this case gustation); and are discussed in terms of possible future research that might investigate those aspects of music that induce such effects in a particular manner, and how such effects might be influenced by participants' pre-existing knowledge and expertise with regard to the target object in question. ©2011 The British Psychological Society.

  2. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  3. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  4. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  5. [Clinical diagnosis of Treacher Collins syndrome and the efficacy of using BAHA].

    PubMed

    Wang, Y B; Chen, X W; Wang, P; Fan, X M; Fan, Y; Liu, Q; Gao, Z Q

    2017-04-20

    Objective: To evaluate the efficacy of softband or implanted BAHA in patients with Treacher Collins syndrome (TCS). Method: Six patients with TCS were studied. The Teber scoring system was used to evaluate the degree of deformity. Air and bone conduction thresholds were assessed by auditory brainstem response (ABR). The infant-toddler meaningful auditory integration scale (IT-MAIS) was used to assess auditory development at three time points: baseline, 3 months, and 6 months. Hearing thresholds and speech recognition scores were measured under unaided and aided conditions. Results: The average deformity score was 14.0±0.6. The TCOF1 gene was tested in two patients. The mean bone conduction threshold was (18.0±4.5) dBnHL and the mean air conduction threshold was (70.5±7.0) dBnHL. The IT-MAIS total, detection, and perception scores improved significantly after wearing the softband BAHA and approached normal levels in the 2 patients under 2 years old. The hearing thresholds of the 6 patients in the unaided and softband BAHA conditions were (65.8±3.8) dBHL and (30.0±3.2) dBHL, respectively (P<0.01), and that of 1 implanted BAHA was 15 dBHL. The speech recognition scores of 3 patients in the unaided and softband BAHA conditions were (31.7±3.5)% and (86.0±1.7)%, respectively (P<0.05), and that of 1 implanted BAHA was 96%. Conclusion: Once a patient is diagnosed with TCS by clinical manifestations and genetic testing, the BAHA system can help restore hearing toward a normal level. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  6. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    This paper describes the real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  7. Monitoring auditory cortical plasticity in hearing aid users with long latency auditory evoked potentials: a longitudinal study

    PubMed Central

    Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile

    2018-01-01

    OBJECTIVE: The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. METHODS: Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. RESULTS: The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. CONCLUSIONS: Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users. PMID:29466495

  8. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    PubMed

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Elevated audiovisual temporal interaction in patients with migraine without aura

    PubMed Central

    2014-01-01

    Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
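
    A standard way to test for audiovisual integration from response-time CDFs is Miller's race-model inequality; whether this exact test was used in the study is an assumption here. The sketch below compares the audiovisual CDF with the bounded sum of the unimodal CDFs on simulated response times.

```python
# Sketch of a race-model test on response-time (RT) CDFs: if the audiovisual
# CDF exceeds the (capped) sum of the unimodal CDFs at any latency, the race
# model is violated, indicating multisensory integration. RTs are simulated.
import numpy as np

def empirical_cdf(rts, grid):
    rts = np.sort(rts)
    return np.searchsorted(rts, grid, side="right") / len(rts)

rng = np.random.default_rng(4)
rt_a = rng.normal(420, 50, 200)              # auditory-only RTs (ms), fake
rt_v = rng.normal(400, 50, 200)              # visual-only RTs, fake
rt_av = rng.normal(340, 45, 200)             # audiovisual RTs, fake (fast)

grid = np.linspace(200, 600, 81)
bound = np.minimum(empirical_cdf(rt_a, grid) + empirical_cdf(rt_v, grid), 1.0)
violation = empirical_cdf(rt_av, grid) - bound
print(violation.max())  # > 0 at some latency => race model violated
```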

  10. Tinnitus Intensity Dependent Gamma Oscillations of the Contralateral Auditory Cortex

    PubMed Central

    van der Loo, Elsa; Gais, Steffen; Congedo, Marco; Vanneste, Sven; Plazier, Mark; Menovsky, Tomas; Van de Heyning, Paul; De Ridder, Dirk

    2009-01-01

    Background: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. Methods and Findings: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting-state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p < 0.05). Conclusion: Auditory phantom percepts thus show the same sound-level-dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest that tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception. PMID:19816597

  11. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of sentences was measured under a wide range of delays between auditory and visual stimuli, in conditions of low auditory clarity (signal-to-noise ratios of -10 dB and -15 dB in pink noise). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms) or less; the image was not helpful when the delay was 8 frames (= 264 ms) or more; and in some cases at the largest delay (32 frames), the video image interfered with comprehension.

  12. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858

  13. Unbound Bilirubin and Auditory Neuropathy Spectrum Disorder in Late Preterm and Term Infants with Severe Jaundice

    PubMed Central

    Amin, Sanjiv B; Wang, Hongyue; Laroia, Nirupama; Orlando, Mark

    2016-01-01

    Objective: To evaluate if unbound bilirubin is a better predictor of auditory neuropathy spectrum disorder (ANSD) than total serum bilirubin (TSB) or the bilirubin albumin molar ratio (BAMR) in late preterm and term neonates with severe jaundice (TSB ≥ 20 mg/dL or TSB that met exchange transfusion criteria). Study design: Infants ≥ 34 weeks gestational age with severe jaundice during the first two weeks of life were eligible for the prospective observational study. A comprehensive auditory evaluation was performed within 72 hours of peak TSB. ANSD was defined as absent or abnormal auditory brainstem evoked response waveform morphology at 80 decibel click intensity in the presence of normal outer hair cell function. TSB, serum albumin, and unbound bilirubin were measured using the colorimetric, bromocresol green, and modified peroxidase method, respectively. Results: Five of 44 infants developed ANSD. By logistic regression, peak unbound bilirubin but not peak TSB or peak BAMR was associated with ANSD (odds ratio 4.6, 95% CI: 1.6-13.5, p = 0.002). On comparing receiver operating characteristic curves, the area under the curve (AUC) for unbound bilirubin (0.92) was significantly greater (p = 0.04) compared with the AUC for TSB (0.50) or BAMR (0.62). Conclusions: Unbound bilirubin is a more sensitive and specific predictor of ANSD than TSB or BAMR in late preterm and term infants with severe jaundice. PMID:26952116
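
    The ROC comparison reported above reduces to computing an AUC per predictor. The sketch below does this on simulated data with sklearn; note that the study compared correlated ROC curves with a dedicated statistical test, which this sketch does not reproduce.

```python
# Illustrative sketch: compute the area under the ROC curve (AUC) for each
# candidate predictor of ANSD. The data below are simulated so that one
# marker is informative and the other is not; they are not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
ansd = np.r_[np.ones(5), np.zeros(39)]                   # 5 of 44 infants
unbound_bili = ansd * 1.5 + rng.normal(1.0, 0.4, 44)     # informative marker
tsb = rng.normal(22.0, 2.0, 44)                          # uninformative here

print("unbound bilirubin AUC:", roc_auc_score(ansd, unbound_bili))
print("TSB AUC:              ", roc_auc_score(ansd, tsb))
```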

  14. Strength of German accent under altered auditory feedback

    PubMed Central

    HOWELL, PETER; DWORZYNSKI, KATHARINA

    2007-01-01

    Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions—normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control. PMID:11414137

  15. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.

    PubMed

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.

  16. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of acoustic deviance detection, suggested that conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  19. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Triesman and Gelade's [Triesman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  20. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  1. Using Facebook to Reach People Who Experience Auditory Hallucinations

    PubMed Central

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. Women, people who use mobile phones, and those experiencing more distress, were reportedly more open to using Facebook as a support and/or therapeutic tool in the future. Conclusions Facebook advertisements can be used to recruit research participants who experience auditory hallucinations quickly and in a cost-effective manner. Most (58%) Web-based respondents are open to Facebook-based support and treatment and are willing to describe their subjective experiences with auditory hallucinations. PMID:27302017

  2. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  3. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of the AMT and of the spreading functions for masking is implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
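
    The ERB-based frequency analysis at the heart of such schemes follows a standard psychoacoustic approximation. The sketch below implements the widely used Glasberg and Moore (1990) ERB formulas, the usual basis for ERB-spaced auditory filter banks; the paper's exact filter implementation may differ:

        import numpy as np

        def erb_bandwidth(f_hz):
            """Equivalent rectangular bandwidth (Hz) of the auditory filter at f_hz."""
            return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

        def erb_number(f_hz):
            """ERB-rate scale: number of ERBs below f_hz."""
            return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

        center_freqs = np.array([250.0, 1000.0, 4000.0])
        print(erb_bandwidth(center_freqs))  # approx. 52, 133, 456 Hz
        print(erb_number(center_freqs))     # filter positions on the ERB-rate scale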

  4. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  5. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  6. Feature-based and object-based attention orientation during short-term memory maintenance.

    PubMed

    Ku, Yixuan

    2015-12-01

    Top-down attention biases the short-term memory (STM) processing at multiple stages. Orienting attention during the maintenance period of STM by a retrospective cue (retro-cue) strengthens the representation of the cued item and improves the subsequent STM performance. In a recent article, Backer et al. (Backer KC, Binns MA, Alain C. J Neurosci 35: 1307-1318, 2015) extended these findings from the visual to the auditory domain and combined electroencephalography to dissociate neural mechanisms underlying feature-based and object-based attention orientation. Both event-related potentials and neural oscillations explained the behavioral benefits of retro-cues and favored the theory that feature-based and object-based attention orientation were independent. Copyright © 2015 the American Physiological Society.

  7. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2.

    PubMed

    Mishra, Rajkishor; Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction  "Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action" (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective  The main objective of the present study was to assess auditory temporal resolution ability through the Gap Detection Threshold (GDT) test in individuals with diabetes mellitus type 2 with high-frequency hearing loss. Methods  Fifteen subjects with diabetes mellitus type 2 with high-frequency hearing loss, aged 30 to 40 years, participated in the study as the experimental group. Fifteen age-matched non-diabetic individuals with normal hearing served as the control group. We administered the GDT test to all participants to assess their temporal resolution ability. Result  We used the independent t-test to compare the groups. Results showed that the diabetic group (experimental) performed significantly more poorly than the non-diabetic group (control). Conclusion  It is possible to conclude that widening of auditory filters and changes in the central auditory nervous system contributed to the poorer performance on the temporal resolution task (Gap Detection Threshold) in individuals with diabetes mellitus type 2. The findings of the present study reveal the deteriorating effect of diabetes mellitus type 2 at the central auditory processing level.
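
    A gap detection test rests on a simple stimulus construction: broadband noise with a silent gap of variable duration. The sketch below (assumed durations and sampling rate, not the clinical protocol used in the study) generates one such stimulus; a threshold search would vary gap_ms adaptively until the listener can no longer detect the gap:

        import numpy as np

        def gap_in_noise(gap_ms, dur_ms=500.0, fs=44100):
            """Broadband noise with a centered silent gap of gap_ms milliseconds."""
            rng = np.random.default_rng(1)
            noise = rng.standard_normal(int(fs * dur_ms / 1000.0))
            gap_len = int(fs * gap_ms / 1000.0)
            start = (len(noise) - gap_len) // 2
            noise[start:start + gap_len] = 0.0  # the silent gap
            return noise

        stimulus = gap_in_noise(gap_ms=5.0)  # e.g., a 5-ms gap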

  8. Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine

    2010-01-01

    Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…

  9. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.
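
    The head-up advantage reported here comes from binaural spatial cues. As a toy illustration only, the sketch below renders the simplest such cue, an interaural time difference via the Woodworth approximation; the study itself used full 3D (HRTF-based) audio rendering, which this delay-only panner does not reproduce:

        import numpy as np

        def itd_pan(mono, azimuth_deg, fs=44100, head_radius_m=0.0875):
            """Lateralize a mono signal by delaying the far ear (Woodworth ITD)."""
            c = 343.0  # speed of sound, m/s
            az = np.radians(abs(azimuth_deg))
            itd_s = (head_radius_m / c) * (az + np.sin(az))
            delay = int(round(itd_s * fs))
            delayed = np.concatenate([np.zeros(delay), mono])[:len(mono)]
            near, far = mono, delayed
            left, right = (near, far) if azimuth_deg <= 0 else (far, near)
            return np.stack([left, right], axis=1)

        tone = np.sin(2 * np.pi * 500 * np.arange(44100) / 44100)
        stereo = itd_pan(tone, azimuth_deg=-45)  # source 45 degrees to the left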

  10. System and algorithm for evaluation of human auditory analyzer state

    NASA Astrophysics Data System (ADS)

    Bachynskiy, Mykhaylo V.; Azarkhov, Oleksandr Yu.; Shtofel, Dmytro Kh.; Horbatiuk, Svitlana M.; Ławicki, Tomasz; Kalizhanova, Aliya; Smailova, Saule; Askarova, Nursanat

    2017-08-01

    The paper discusses the evaluation of the human auditory state with technical means and considers the disadvantages of existing clinical audiometry methods and systems. A method for evaluating the state of the auditory analyzer by means of pulsometry is proposed to make the examination more objective and efficient. The method uses two optoelectronic sensors located on the carotid artery and the ear lobe. On this basis, a biotechnical system for evaluation and stimulation of the human auditory analyzer was developed, and its hardware and software were substantiated. Different stimulation modes of the designed system were tested, and the influence of the procedure on the patient was studied.

  11. A P300 event related potential technique for assessment of sexually oriented interest.

    PubMed

    Vardi, Yoram; Volos, Michal; Sprecher, Elliot; Granovsky, Yelena; Gruenwald, Ilan; Yarnitsky, David

    2006-12-01

    Despite all of the modern, sophisticated tests that exist for diagnosing and assessing male and female sexual disorders, to our knowledge there is no objective psychophysiological test to evaluate sexual arousal and interest. We provide preliminary data showing a decrease in auditory P300 wave amplitude during exposure to sexually explicit video clips and a significant correlation between the auditory P300 amplitude decrease and self-reported scores of sexual arousal and interest in the clips. A total of 30 healthy subjects were exposed to several blocks of auditory stimuli administered using an oddball paradigm. Baseline auditory P300 amplitudes were obtained and auditory stimuli were then delivered while viewing visual clips with 3 types of content, including sport, scenery and sex. Auditory P300 amplitude significantly decreased during viewing of clips of all contents. Viewing sexual content clips caused a maximal decrease in P300 amplitude (p <0.0001). In addition, a high correlation was found between the amplitude decrease and scores on the sexual arousal questionnaire regarding the viewed clips (r = 0.61, p <0.001). Furthermore, the P300 amplitude decrease was significantly related to the sexual interest score (r = 0.37, p = 0.042) but not to interest in clips of nonsexual content. The change in auditory P300 amplitude during exposure to visual stimuli with sexual context seems to be an objective measure of a subject's sexual interest. This method might be applied to assess therapeutic intervention and as a diagnostic tool for assessing disorders of impaired libido or psychogenic sexual dysfunction.
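
    The headline statistic here is an across-subject Pearson correlation between the P300 amplitude decrease and questionnaire scores. A minimal sketch with simulated numbers (the study's per-subject data are not reproduced here):

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(2)
        arousal = rng.uniform(1, 7, size=30)                  # questionnaire scores
        # Hypothetical per-subject decreases (baseline minus sex-clip P300 amplitude).
        p300_decrease = 0.8 * arousal + rng.normal(0, 1, 30)

        r, p = pearsonr(p300_decrease, arousal)
        print(f"r = {r:.2f}, p = {p:.4f}")  # the study reported r = 0.61, p < 0.001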

  12. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain-computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
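
    One way to read "individual resonance frequency" operationally: present steady-state stimulation at a range of rates and pick the rate with the largest stimulus-locked EEG power. The sketch below does this with simulated data (a 42-Hz resonance is assumed purely for illustration); real ASSR pipelines average many epochs and correct for background noise:

        import numpy as np

        fs = 1000.0
        t = np.arange(0, 2.0, 1 / fs)
        rates = np.arange(30, 61, 2)  # candidate stimulation rates, Hz

        def power_at(eeg, freq):
            """Spectral power at the FFT bin nearest `freq`."""
            spectrum = np.fft.rfft(eeg)
            freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
            return np.abs(spectrum[np.argmin(np.abs(freqs - freq))]) ** 2

        rng = np.random.default_rng(3)
        responses = []
        for rate in rates:
            gain = np.exp(-((rate - 42.0) ** 2) / 50.0)  # simulated 42-Hz resonance
            eeg = gain * np.sin(2 * np.pi * rate * t) + rng.normal(0, 1, t.size)
            responses.append(power_at(eeg, rate))

        print("individual ASSR peak:", rates[int(np.argmax(responses))], "Hz")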

  13. Objective quantification of the tinnitus decompensation by synchronization measures of auditory evoked single sweeps.

    PubMed

    Strauss, Daniel J; Delb, Wolfgang; D'Amelio, Roberto; Low, Yin Fen; Falkai, Peter

    2008-02-01

    Large-scale neural correlates of the tinnitus decompensation might be used for an objective evaluation of therapies and neurofeedback based therapeutic approaches. In this study, we try to identify large-scale neural correlates of the tinnitus decompensation using wavelet phase stability criteria of single sweep sequences of late auditory evoked potentials as synchronization stability measure. The extracted measure provided an objective quantification of the tinnitus decompensation and allowed for a reliable discrimination between a group of compensated and decompensated tinnitus patients. We provide an interpretation for our results by a neural model of top-down projections based on the Jastreboff tinnitus model combined with the adaptive resonance theory which has not been applied to model tinnitus so far. Using this model, our stability measure of evoked potentials can be linked to the focus of attention on the tinnitus signal. It is concluded that the wavelet phase stability of late auditory evoked potential single sweeps might be used as objective tinnitus decompensation measure and can be interpreted in the framework of the Jastreboff tinnitus model and adaptive resonance theory.
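
    The synchronization stability idea can be illustrated with a generic inter-trial phase measure: extract each single sweep's phase at one time-frequency point with a complex Morlet wavelet, then take the resultant length of those phases across sweeps. This is a sketch in the spirit of, but not identical to, the wavelet phase stability criterion used in the study; all parameters are assumed:

        import numpy as np

        fs = 500.0
        freq = 8.0                     # analysis frequency, Hz (assumed)
        w = 6.0                        # wavelet width in cycles
        n_sweeps, n_samples = 100, 256
        rng = np.random.default_rng(4)
        sweeps = rng.standard_normal((n_sweeps, n_samples))  # stand-in for EEG sweeps

        # Complex Morlet wavelet centered on the sweep.
        t = (np.arange(n_samples) - n_samples // 2) / fs
        sigma_t = w / (2 * np.pi * freq)
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

        # Phase of each sweep at the sweep center, then the phase-locking value.
        phases = np.array([np.angle(np.vdot(wavelet, s)) for s in sweeps])
        plv = np.abs(np.mean(np.exp(1j * phases)))  # 0 = random, 1 = perfectly stable
        print(f"phase stability: {plv:.2f}")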

  14. Cortical neurons of bats respond best to echoes from nearest targets when listening to natural biosonar multi-echo streams.

    PubMed

    Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred

    2016-10-27

    Bats orientate in darkness by listening to echoes from their biosonar calls, a behaviour known as echolocation. Recent studies showed that cortical neurons respond in a highly selective manner when stimulated with natural echolocation sequences that contain echoes from single targets. However, it remains unknown how cortical neurons process echolocation sequences containing echo information from multiple objects. In the present study, we used echolocation sequences containing echoes from three, two or one object separated in depth as stimuli to study neuronal activity in the bat auditory cortex. Neuronal activity was recorded with multi-electrode arrays placed in the dorsal auditory cortex, where neurons tuned to target-distance are found. Our results show that target-distance encoding neurons are mostly selective to echoes coming from the closest object, and that the representation of echo information from distant objects is selectively suppressed. This suppression extends over a large part of the dorsal auditory cortex and may override possible parallel processing of multiple objects. The presented data suggest that global cortical suppression might establish a cortical "default mode" that allows selective focusing on the closest obstacle even without active attention from the animals.

  15. Cortical neurons of bats respond best to echoes from nearest targets when listening to natural biosonar multi-echo streams

    PubMed Central

    Beetz, M. Jerome; Hechavarría, Julio C.; Kössl, Manfred

    2016-01-01

    Bats orientate in darkness by listening to echoes from their biosonar calls, a behaviour known as echolocation. Recent studies showed that cortical neurons respond in a highly selective manner when stimulated with natural echolocation sequences that contain echoes from single targets. However, it remains unknown how cortical neurons process echolocation sequences containing echo information from multiple objects. In the present study, we used echolocation sequences containing echoes from three, two or one object separated in depth as stimuli to study neuronal activity in the bat auditory cortex. Neuronal activity was recorded with multi-electrode arrays placed in the dorsal auditory cortex, where neurons tuned to target-distance are found. Our results show that target-distance encoding neurons are mostly selective to echoes coming from the closest object, and that the representation of echo information from distant objects is selectively suppressed. This suppression extends over a large part of the dorsal auditory cortex and may override possible parallel processing of multiple objects. The presented data suggest that global cortical suppression might establish a cortical “default mode” that allows selective focusing on the closest obstacle even without active attention from the animals. PMID:27786252

  16. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    PubMed Central

    Denham, Susan; Bõhm, Tamás M.; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the “ABA-” auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception. PMID:24616656

  17. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli.

    PubMed

    Denham, Susan; Bõhm, Tamás M; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception.
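
    The "ABA-" paradigm itself is simple to synthesize: repeating triplets of a low tone (A) and a higher tone (B) followed by a silent slot. A minimal sketch with assumed frequencies and timing (the studies' exact parameters are not given in these abstracts):

        import numpy as np

        def aba_sequence(n_triplets=50, f_a=400.0, f_b=560.0,
                         tone_dur=0.075, slot_dur=0.125, fs=44100):
            """Concatenate ABA- triplets: three gated tones plus one silent slot."""
            t = np.arange(0, tone_dur, 1 / fs)
            ramp = np.minimum(1, np.minimum(t, tone_dur - t) / 0.005)  # 5-ms ramps
            slot = np.zeros(int(fs * slot_dur))

            def tone(f):
                out = slot.copy()
                out[:t.size] = ramp * np.sin(2 * np.pi * f * t)
                return out

            triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), slot])
            return np.tile(triplet, n_triplets)

        sequence = aba_sequence()  # listeners report Integrated, Segregated, or other embedded patterns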

  18. Neural Representation of Concurrent Harmonic Sounds in Monkey Primary Auditory Cortex: Implications for Models of Auditory Scene Analysis

    PubMed Central

    Steinschneider, Mitchell; Micheyl, Christophe

    2014-01-01

    The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate “auditory objects” with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the “object-related negativity” recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch. PMID:25209282

  19. Gap-induced reductions of evoked potentials in the auditory cortex: A possible objective marker for the presence of tinnitus in animals.

    PubMed

    Berger, Joel I; Owen, William; Wilson, Caroline A; Hockley, Adam; Coomber, Ben; Palmer, Alan R; Wallace, Mark N

    2018-01-15

    Animal models of tinnitus are essential for determining the underlying mechanisms and testing pharmacotherapies. However, there is doubt over the validity of current behavioural methods for detecting tinnitus. Here, we applied a stimulus paradigm widely used in a behavioural test (gap-induced inhibition of the acoustic startle reflex; GPIAS) whilst recording from the auditory cortex, and showed neural response changes that mirror those found in the behavioural tests. We implanted guinea pigs (GPs) with electrocorticographic (ECoG) arrays and recorded baseline auditory cortical responses to a startling stimulus. When a gap was inserted in otherwise continuous background noise prior to the startling stimulus, there was a clear reduction in the subsequent evoked response (termed gap-induced reductions in evoked potentials; GIREP), suggestive of a neural analogue of the GPIAS test. We then unilaterally exposed guinea pigs to narrowband noise (left ear; 8-10 kHz; 1 h) at one of two different sound levels - either 105 dB SPL or 120 dB SPL - and recorded the same responses seven to ten weeks following the noise exposure. Significant deficits in GIREP were observed for all areas of the auditory cortex (AC) in the 120 dB-exposed GPs, but not in the 105 dB-exposed GPs. These deficits could not simply be accounted for by changes in response amplitudes. Furthermore, in the contralateral (right) caudal AC we observed a significant increase in evoked potential amplitudes across narrowband background frequencies in both 105 dB- and 120 dB-exposed GPs. Taken in the context of the large body of literature that has used the behavioural test as a demonstration of the presence of tinnitus, these results are suggestive of objective neural correlates of the presence of noise-induced tinnitus and hyperacusis. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
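
    The stimulus logic of the GPIAS/GIREP paradigm is: continuous background noise, an optional silent gap, then a startling pulse shortly afterwards; tinnitus is inferred when the gap fails to reduce the subsequent response. A minimal sketch with assumed timings and levels (not the paper's exact protocol):

        import numpy as np

        def gpias_trial(gap=True, fs=44100, bg_dur=4.0, gap_dur=0.05,
                        lead=0.1, pulse_dur=0.02):
            """Background noise, optional gap ending `lead` s before a startle pulse."""
            rng = np.random.default_rng(5)
            n = int(fs * bg_dur)
            x = 0.1 * rng.standard_normal(n)       # quiet background noise
            pulse_start = n - int(fs * pulse_dur)
            x[pulse_start:] = rng.standard_normal(int(fs * pulse_dur))  # loud pulse
            if gap:
                gap_end = pulse_start - int(fs * lead)
                x[gap_end - int(fs * gap_dur):gap_end] = 0.0  # the silent gap
            return x

        with_gap, no_gap = gpias_trial(gap=True), gpias_trial(gap=False)
        # Comparing evoked responses to the pulse across the two trial types
        # yields the gap-induced reduction (GIREP) described above.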

  20. A trade-off between somatosensory and auditory related brain activity during object naming but not reading.

    PubMed

    Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J

    2015-03-18

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. Copyright © 2015 Seghier et al.

  1. Cultivating Empathy for the Mentally Ill Using Simulated Auditory Hallucinations

    ERIC Educational Resources Information Center

    Bunn, William; Terpstra, Jan

    2009-01-01

    Objective: The authors address the issue of cultivating medical students' empathy for the mentally ill by examining medical student empathy pre- and postsimulated auditory hallucination experience. Methods: At the University of Utah, 150 medical students participated in this study during their 6-week psychiatry rotation. The Jefferson Scale of…

  2. Effects of Auditory Distraction on Cognitive Processing of Young Adults

    ERIC Educational Resources Information Center

    LaPointe, Leonard L.; Heald, Gary R.; Stierwalt, Julie A. G.; Kemker, Brett E.; Maurice, Trisha

    2007-01-01

    Objective: The effects of interference, competition, and distraction on cognitive processing are unclearly understood, particularly regarding type and intensity of auditory distraction across a variety of cognitive processing tasks. Method: The purpose of this investigation was to report two experiments that sought to explore the effects of types…

  3. Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis

    ERIC Educational Resources Information Center

    Kratzig, Gregory P.; Arbuthnott, Katherine D.

    2006-01-01

    Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…

  4. Reduced auditory processing capacity during vocalization in children with Selective Mutism.

    PubMed

    Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair

    2007-02-01

    Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, the effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasks per se. The data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.

  5. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing.

    PubMed

    Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako

    2014-10-15

    When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.
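
    A binaural beat stimulus with a continuously varying rate is straightforward to construct: a fixed-frequency tone in one ear and, in the other, a tone whose instantaneous frequency exceeds it by the desired beat rate, obtained by integrating the instantaneous frequency into a phase. The sketch below assumes a 400-Hz carrier; the study's carrier frequency is not stated in the abstract:

        import numpy as np

        fs = 44100
        dur = 30.0
        t = np.arange(0, dur, 1 / fs)
        carrier = 400.0                         # assumed carrier frequency, Hz
        beat = 3.0 + (60.0 - 3.0) * t / dur     # beat rate sweeps 3 -> 60 Hz

        left = np.sin(2 * np.pi * carrier * t)
        # Integrate the right ear's instantaneous frequency to obtain its phase.
        phase_right = 2 * np.pi * np.cumsum(carrier + beat) / fs
        right = np.sin(phase_right)
        stereo = np.stack([left, right], axis=1)  # one tone per ear -> beat percept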

  6. The role of auditory and kinaesthetic feedback mechanisms on phonatory stability in children.

    PubMed

    Rathna Kumar, S B; Azeem, Suhail; Choudhary, Abhishek Kumar; Prakash, S G R

    2013-12-01

    Auditory feedback plays an important role in phonatory control. When auditory feedback is disrupted, various changes are observed in vocal motor control. Vocal intensity and fundamental frequency (F0) levels tend to increase in response to auditory masking. Because of the close reflexive links between the auditory and phonatory systems, it is likely that phonatory stability may be disrupted when auditory feedback is disrupted or altered. However, studies of phonatory stability under auditory masking in adult subjects showed that most subjects maintained normal levels of phonatory stability. The authors of these earlier investigations suggested that auditory feedback is not the sole contributor to vocal motor control and phonatory stability; a complex neuromuscular reflex system known as kinaesthetic feedback may play a role in controlling phonatory stability when auditory feedback is disrupted or lacking. This raises the question of whether children show similar patterns of phonatory stability under auditory masking, since their neuromotor systems are still developing, less mature, and less resistant to altered auditory feedback than those of adults. A total of 40 normal hearing and speaking children (20 male and 20 female) between 6 and 8 years of age participated as subjects. Acoustic parameters such as shimmer, jitter, and harmonics-to-noise ratio (HNR) were measured and compared between a no-masking condition (0 dB ML) and a masking condition (90 dB ML). Despite their neuromotor systems being less mature and less resistant than adults' to altered auditory feedback, most of the children in the study demonstrated increased phonatory stability, reflected in reduced shimmer and jitter and increased HNR values. This study indicates that most of the children demonstrate well-established patterns of kinaesthetic feedback, which might have allowed them to maintain normal levels of vocal motor control even in the presence of disturbed auditory feedback. Hence, it can be concluded that children also exhibit a kinaesthetic feedback mechanism to control phonatory stability when auditory feedback is disrupted, which in turn highlights the importance of including kinaesthetic feedback in therapeutic/intervention approaches for children with hearing and neurogenic speech deficits.
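
    The three acoustic measures used here have simple standard definitions once glottal cycle periods and peak amplitudes have been extracted (the extraction itself is the harder step and is omitted). A minimal sketch with illustrative numbers:

        import numpy as np

        def local_jitter(periods):
            """Mean absolute cycle-to-cycle period difference over the mean period."""
            p = np.asarray(periods, dtype=float)
            return np.mean(np.abs(np.diff(p))) / np.mean(p)

        def local_shimmer(amplitudes):
            """Mean absolute cycle-to-cycle amplitude difference over the mean amplitude."""
            a = np.asarray(amplitudes, dtype=float)
            return np.mean(np.abs(np.diff(a))) / np.mean(a)

        periods_ms = [4.00, 4.02, 3.98, 4.01, 3.99]  # ~250-Hz voice, illustrative
        amps = [1.00, 0.98, 1.02, 0.99, 1.01]
        print(f"jitter  = {100 * local_jitter(periods_ms):.2f}%")
        print(f"shimmer = {100 * local_shimmer(amps):.2f}%")

    Lower jitter and shimmer together with a higher HNR, as most children showed under masking, correspond to greater phonatory stability.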

  7. The Corticofugal Effects of Auditory Cortex Microstimulation on Auditory Nerve and Superior Olivary Complex Responses Are Mediated via Alpha-9 Nicotinic Receptor Subunit

    PubMed Central

    Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.

    2016-01-01

    Background and Objective The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498

  8. Reduced auditory efferent activity in childhood selective mutism.

    PubMed

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with SM and 16 normally developing control children matched for age and gender. All children underwent pure-tone audiometry, speech reception threshold and speech discrimination testing, middle-ear acoustic reflex threshold and decay function measurement, transient evoked otoacoustic emission recording, suppression of transient evoked otoacoustic emissions, and auditory brainstem response recording. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  9. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
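
    The core analysis step such tutorials describe is ordinary stimulus-locked epoching and averaging: slicing the continuous recording around each stimulus onset and averaging the slices, which reinforces the response phase-locked to the stimulus and cancels unlocked noise. A minimal sketch with simulated data and assumed parameters:

        import numpy as np

        fs = 20000                        # high sampling rate typical of ABR work
        rng = np.random.default_rng(7)
        recording = rng.standard_normal(fs * 60)               # stand-in for 60 s of EEG
        onsets = np.arange(1000, recording.size - 6000, 5000)  # stimulus onsets (samples)

        epoch_len = 4000                  # 200 ms post-onset at 20 kHz
        epochs = np.stack([recording[o:o + epoch_len] for o in onsets])
        cabr = epochs.mean(axis=0)        # the averaged (c)ABR waveform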

  10. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences of children using cochlear implants, who were participating in an auditory-emphasis therapy approach, with those of the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only, and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity affected modality preferences only in adults, who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to that of children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) describe the pattern of modality preferences reported in young children without hearing loss; (2) recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) understand the role of familiarity in modality preferences in children with and without hearing loss.

  11. A vestibular phenotype for Waardenburg syndrome?

    NASA Technical Reports Server (NTRS)

    Black, F. O.; Pesznecker, S. C.; Allen, K.; Gianna, C.

    2001-01-01

    OBJECTIVE: To investigate vestibular abnormalities in subjects with Waardenburg syndrome. STUDY DESIGN: Retrospective record review. SETTING: Tertiary referral neurotology clinic. SUBJECTS: Twenty-two adult white subjects with clinical diagnosis of Waardenburg syndrome (10 type I and 12 type II). INTERVENTIONS: Evaluation for Waardenburg phenotype, history of vestibular and auditory symptoms, tests of vestibular and auditory function. MAIN OUTCOME MEASURES: Results of phenotyping, results of vestibular and auditory symptom review (history), results of vestibular and auditory function testing. RESULTS: Seventeen subjects were women, and 5 were men. Their ages ranged from 21 to 58 years (mean, 38 years). Sixteen of the 22 subjects sought treatment for vertigo, dizziness, or imbalance. For subjects with vestibular symptoms, the results of vestibuloocular tests (calorics, vestibular autorotation, and/or pseudorandom rotation) were abnormal in 77%, and the results of vestibulospinal function tests (computerized dynamic posturography, EquiTest) were abnormal in 57%, but there were no specific patterns of abnormality. Six had objective sensorineural hearing loss. Thirteen had an elevated summating/action potential (>0.40) on electrocochleography. All subjects except those with severe hearing loss (n = 3) had normal auditory brainstem response results. CONCLUSION: Patients with Waardenburg syndrome may experience primarily vestibular symptoms without hearing loss. Electrocochleography and vestibular function tests appear to be the most sensitive measures of otologic abnormalities in such patients.

  12. Recent advances in exploring the neural underpinnings of auditory scene perception

    PubMed Central

    Snyder, Joel S.; Elhilali, Mounya

    2017-01-01

    Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022

  13. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  14. The Color-Word Interference Test and Its Relation to Performance Impairment under Auditory Distraction.

    ERIC Educational Resources Information Center

    Thackray, Richard I.; And Others

    The ability to resist distraction is an important requirement for air traffic controllers. The study examined the relationship between performance on the Stroop color-word interference test (a suggested measure of distraction susceptibility) and impairment under auditory distraction on a task requiring the subject to generate random sequences of…

  15. [Value of cumulative electrodermal responses in subliminal auditory perception. A preliminary study].

    PubMed

    Borgeat, F; Pannetier, M F

    1982-01-01

    This exploratory study examined the usefulness of averaging electrodermal potential responses for research on subliminal auditory perception. Eighteen female subjects were exposed to three kinds of auditory stimulation (emotional, neutral, and a 1000 Hz tone), each repeated six times at three intensities (detection threshold, 10 dB below this threshold, and 10 dB above identification threshold). Analysis of electrodermal potential responses showed that the number of responses was related to the emotionality of subliminal stimuli presented at detection threshold but not at 10 dB below it. The proposed interpretation of the data refers to perceptual defence theory. This study indicates that the electrodermal response count constitutes a useful measure for subliminal auditory perception research, but averaging those responses was not shown to bring additional information.

  16. Objective measures of binaural masking level differences and comodulation masking release based on late auditory evoked potentials.

    PubMed

    Epp, Bastian; Yasin, Ifat; Verhey, Jesko L

    2013-12-01

    The audibility of important sounds is often hampered by the presence of other masking sounds. The present study investigates whether a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level but different signal levels, within the same set of listeners who participated in the psychoacoustical experiment. The data indicate differences in N1 and P2 between stimuli with and without interaural phase disparities. However, differences for stimuli with and without coherent masker modulation were found only for P2; that is, only P2 is sensitive to the increase in audibility, irrespective of the cue that caused the masking release. The amplitude of P2 is consistent with the psychoacoustical finding of an addition of the masking releases when both cues are present. Even though it cannot be concluded where along the auditory pathway audibility is represented, the P2 component of auditory evoked potentials is a candidate for an objective measure of audibility in the human auditory system. Copyright © 2013 Elsevier B.V. All rights reserved.
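
    The N1 and P2 measures used here are typically read off an averaged evoked response. A minimal sketch of that extraction follows; the sampling rate, baseline length, and peak-search windows are illustrative assumptions, not the authors' analysis parameters.

        import numpy as np

        def n1_p2_amplitudes(epochs, fs=1000, baseline_ms=100):
            """Average EEG epochs and extract N1/P2 peak amplitudes.

            epochs : (n_trials, n_samples) array in microvolts, each epoch
            starting baseline_ms before stimulus onset. Window limits below
            are conventional choices assumed for illustration only.
            """
            erp = epochs.mean(axis=0)                  # average across trials
            t0 = int(baseline_ms * fs / 1000)          # sample of stimulus onset
            erp = erp - erp[:t0].mean()                # baseline correction

            def win(lo_ms, hi_ms):                     # ms window -> sample slice
                return slice(t0 + int(lo_ms * fs / 1000),
                             t0 + int(hi_ms * fs / 1000))

            n1 = erp[win(80, 150)].min()               # N1: negative deflection
            p2 = erp[win(150, 250)].max()              # P2: positive deflection
            return n1, p2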

  17. Temporal lobe networks supporting the comprehension of spoken words.

    PubMed

    Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius

    2017-09-01

    Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural-connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus, and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Evaluation of Hearing in Children with Autism by Using TEOAE and ABR

    ERIC Educational Resources Information Center

    Tas, Abdullah; Yagiz, Recep; Tas, Memduha; Esme, Meral; Uzun, Cem; Karasalihoglu, Ahmet Rifat

    2007-01-01

    Assessment of auditory abilities is important in the diagnosis and treatment of children with autism. The aim was to evaluate hearing objectively by using transient evoked otoacoustic emission (TEOAE) and auditory brainstem response (ABR). Tests were performed on 30 children with autism and 15 typically developing children, following otomicroscopy…

  19. The Effects of Auditory Information on 4-Month-Old Infants' Perception of Trajectory Continuity

    ERIC Educational Resources Information Center

    Bremner, J. Gavin; Slater, Alan M.; Johnson, Scott P.; Mason, Uschi C.; Spring, Jo

    2012-01-01

    Young infants perceive an object's trajectory as continuous across occlusion provided the temporal or spatial gap in perception is small. In 3 experiments involving 72 participants the authors investigated the effects of different forms of auditory information on 4-month-olds' perception of trajectory continuity. Provision of dynamic auditory…

  20. Working Memory for Patterned Sequences of Auditory Objects in a Songbird

    ERIC Educational Resources Information Center

    Comins, Jordan A.; Gentner, Timothy Q.

    2010-01-01

    The capacity to remember sequences is critical to many behaviors, such as navigation and communication. Adult humans readily recall the serial order of auditory items, and this ability is commonly understood to support, in part, the speech processing for language comprehension. Theories of short-term serial recall posit either use of absolute…

  1. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  2. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS)

    PubMed Central

    Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have to date not been well translated to the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS), we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing participants with bilateral subjective tinnitus and in controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
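
    Connectivity between all channel pairs in this kind of design is commonly computed as the correlation of each pair of hemoglobin time series over the analysis window; the abstract does not specify the exact estimator, so the sketch below is a plausible reading rather than the study's method.

        import numpy as np

        def channel_pair_connectivity(hbo):
            """Pearson correlation between every pair of fNIRS channels.

            hbo : (n_channels, n_samples) oxygenated-hemoglobin time series
            from a baseline window. Off-diagonal entries are the channel-pair
            connectivity values that would be compared across groups.
            """
            return np.corrcoef(hbo)

        # Synthetic example: 20 channels, 60 s of data at 10 Hz
        rng = np.random.default_rng(0)
        conn = channel_pair_connectivity(rng.normal(size=(20, 600)))
        print(conn.shape)                              # (20, 20)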

  3. Behavioral Signs of (Central) Auditory Processing Disorder in Children With Nonsyndromic Cleft Lip and/or Palate: A Parental Questionnaire Approach.

    PubMed

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-03-01

    Objective: Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods: Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results: Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion: A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an auditory diagnosis is made for this population.

  4. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at using non-face cues, specifically voices, for person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Broad attention to multiple individual objects may facilitate change detection with complex auditory scenes.

    PubMed

    Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S

    2016-11-01

    Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was smaller in Experiment 2 than in Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant objects, compared to irrelevant objects, during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid-cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. The influence of the Japanese waving cat on the joint spatial compatibility effect: A replication and extension of Dolk, Hommel, Prinz, and Liepelt (2013).

    PubMed

    Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph

    2017-01-01

    In a joint go/no-go Simon task, each of two participants responds to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally, and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task setting in which non-social objects like a Japanese waving cat were present, but no real co-actor. They postulated that every sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient has so far not been well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual joint go/no-go Simon task (Experiment 2). Results revealed that joint spatial compatibility effects occurred only in the auditory Simon task, and only when the cat provided auditory cues, whereas no joint spatial compatibility effects were found in the visual Simon task. This demonstrates that it is not the sufficiently salient object alone that leads to joint spatial compatibility effects but rather a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.

  7. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    PubMed

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modalities, to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false-alarm rates for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that a specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Conceptual Coherence Affects Phonological Activation of Context Objects during Object Naming

    ERIC Educational Resources Information Center

    Oppermann, Frank; Jescheniak, Jorg D.; Schriefers, Herbert

    2008-01-01

    In 4 picture-word interference experiments, speakers named a target object that was presented with a context object. Using auditory distractors that were phonologically related or unrelated either to the target object or the context object, the authors assessed whether phonological processing was confined to the target object or not. Phonological…

  9. The effect of postsurgical pain on attentional processing in horses.

    PubMed

    Dodds, Louise; Knight, Laura; Allen, Kate; Murrell, Joanna

    2017-07-01

    This prospective clinical study investigated the effect of postsurgical pain on the performance of horses in a novel object and auditory startle task. Subjects were 20 horses undergoing different types of surgery and 16 control horses that did not undergo surgery. The interaction of 36 horses with novel objects and their response to an auditory stimulus were measured at two time points: the day before surgery (T1) and the day after surgery (T2) for surgical horses (G1), and at a similar time interval for control horses (G2). Pain and sedation were measured using simple descriptive scales at the time the tests were carried out. The total time or score attributed to each of the behavioural categories was compared between groups (G1 and G2) for each test and between tests (T1 and T2) for each group. The median (range) time spent interacting with novel objects was reduced in G1 from 58 (6-367) seconds at T1 to 12 (0-495) seconds at T2 (p=0.0005). In G2, the change in interaction time between T1 and T2 was not statistically significant. The median (range) total auditory score was 7 (3-12) and 10 (1-12) in G1 and G2, respectively, at T1, decreasing to 6 (0-10) in G1 after surgery and 9.5 (1-12) in G2 (p=0.0003 and p=0.94, respectively). There was a difference in total auditory score between G1 and G2 at T2 (p=0.0169), with the score being lower in G1 than G2. Postsurgical pain negatively impacts attention towards novel objects and causes decreased responsiveness in an auditory startle test. In horses, tasks demanding attention may be useful as a biomarker of pain. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. All rights reserved.

  10. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
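
    The EM-fitted state-space decoder itself is beyond an abstract-level sketch, but its core idea, namely turning noisy per-window evidence about which speaker's envelope better explains the MEG response into a smoothed posterior over the attentional state, can be illustrated with a two-state forward-backward smoother. The evidence format and transition probability below are assumptions for illustration, not the authors' model.

        import numpy as np

        def smooth_attention(evidence, p_stay=0.95):
            """Two-state forward-backward smoother over windowed evidence.

            evidence : (n_windows, 2) array of likelihoods that the MEG
            response in each window matches speaker 1 vs. speaker 2 (e.g.,
            derived from envelope correlations). An illustrative stand-in
            for the paper's EM-fitted state-space model, not the model
            itself. Returns the posterior probability of attending speaker 1.
            """
            T = np.array([[p_stay, 1 - p_stay],
                          [1 - p_stay, p_stay]])       # sticky state transitions
            n = len(evidence)
            fwd = np.zeros((n, 2))
            bwd = np.ones((n, 2))
            fwd[0] = 0.5 * evidence[0]
            fwd[0] /= fwd[0].sum()
            for t in range(1, n):                      # forward (filtering) pass
                fwd[t] = evidence[t] * (T.T @ fwd[t - 1])
                fwd[t] /= fwd[t].sum()
            for t in range(n - 2, -1, -1):             # backward (smoothing) pass
                bwd[t] = T @ (evidence[t + 1] * bwd[t + 1])
                bwd[t] /= bwd[t].sum()
            post = fwd * bwd
            return (post / post.sum(axis=1, keepdims=True))[:, 0]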

  11. Auditory Cortical Processing in Real-World Listening: The Auditory System Going Real

    PubMed Central

    Bizley, Jennifer; Shamma, Shihab A.; Wang, Xiaoqin

    2014-01-01

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. PMID:25392481

  12. Auditory cortical processing in real-world listening: the auditory system going real.

    PubMed

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  13. Neurophysiological mechanisms of cortical plasticity impairments in schizophrenia and modulation by the NMDA receptor agonist D-serine

    PubMed Central

    Kantrowitz, Joshua T.; Epstein, Michael L.; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M.; Revheim, Nadine; Lehrfeld, Nayla P.; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C.

    2016-01-01

    Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time–frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. PMID:27913408
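
    θ-band event-related power of the kind reported here is conventionally obtained by narrow-band filtering and extracting instantaneous power. A generic sketch of that step follows; the band edges, filter order, and sampling rate are assumptions, not the study's pipeline.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def theta_power(eeg, fs=500, band=(4.0, 7.0)):
            """Instantaneous theta-band power of one EEG channel.

            eeg : 1-D array in microvolts. The 4-7 Hz band is a conventional
            theta definition assumed here for illustration.
            """
            nyq = fs / 2
            b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
            filtered = filtfilt(b, a, eeg)             # zero-phase band-pass
            return np.abs(hilbert(filtered)) ** 2      # analytic-signal power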

  14. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    PubMed

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study was to investigate, by applying tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodal stimulation compared to sham, and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an underestimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed greater variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. An initial investigation into the validity of a computer-based auditory processing assessment (Feather Squadron).

    PubMed

    Barker, Matthew D; Purdy, Suzanne C

    2016-01-01

    This research investigated a novel method for identifying school-aged children with poor auditory processing and measuring their abilities through a tablet computer. Feasibility and test-retest reliability were investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results for most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate that the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, it may be a useful option for audiologists performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to further investigate the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion, and/or functional imaging measures of auditory function.

  16. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.

    PubMed

    Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain-computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  17. Alterations in interhemispheric gamma-band connectivity are related to the emergence of auditory verbal hallucinations in healthy subjects during NMDA-receptor blockade.

    PubMed

    Thiebes, Stephanie; Steinmann, Saskia; Curic, Stjepan; Polomac, Nenad; Andreou, Christina; Eichler, Iris-Carola; Eichler, Lars; Zöllner, Christian; Gallinat, Jürgen; Leicht, Gregor; Mulert, Christoph

    2018-06-01

    Auditory verbal hallucinations (AVH) are a common positive symptom of schizophrenia. Excitatory-to-inhibitory (E/I) imbalance related to disturbed N-methyl-D-aspartate receptor (NMDAR) functioning has been suggested as a possible mechanism underlying altered connectivity and AVH in schizophrenia. The current study examined the effects of ketamine, an NMDAR antagonist, on glutamate-related mechanisms underlying interhemispheric gamma-band connectivity, conscious auditory perception during dichotic listening (DL), and the emergence of auditory verbal distortions and hallucinations (AVD/AVH) in healthy volunteers. In a single-blind, pseudo-randomized, placebo-controlled crossover design, nineteen male, right-handed volunteers were measured using 64-channel electroencephalography (EEG). Psychopathology was assessed with the PANSS interview and the 5D-ASC questionnaire, including a subscale to detect auditory alterations with regard to AVD/AVH (AUA-AVD/AVH). Interhemispheric connectivity analysis was performed using eLORETA source estimation and lagged phase synchronization (LPS) in the gamma-band range (30-100 Hz). Ketamine induced positive symptoms such as hallucinations in a subgroup of healthy subjects. In addition, interhemispheric gamma-band connectivity was found to be altered under ketamine compared to placebo, and subjects with AUA-AVD/AVH under ketamine showed significantly higher interhemispheric gamma-band connectivity than subjects without AUA-AVD/AVH. These findings demonstrate a relationship between NMDAR functioning, interhemispheric connectivity in the gamma-band frequency range between bilateral auditory cortices, and the emergence of AVD/AVH in healthy subjects. The result is in accordance with the interhemispheric miscommunication hypothesis of AVH and argues for a possible role of glutamate in AVH in schizophrenia.
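
    Lagged phase synchronization is designed to discount zero-lag coupling, which in sensor or source data is dominated by volume conduction. A simplified single-frequency sketch in the spirit of Pascual-Marqui's formulation follows; the study's eLORETA-based implementation differs in detail, so treat this only as an illustration.

        import numpy as np

        def lagged_phase_sync(x, y, fs=500, freq=40.0):
            """Simplified lagged phase synchronization at one frequency.

            x, y : (n_epochs, n_samples) time series from two sources.
            Per-epoch Fourier phase differences at `freq` are averaged, and
            the instantaneous (real, zero-lag) component is partialled out
            so that volume-conducted coupling does not contribute.
            """
            n = x.shape[1]
            k = int(round(freq * n / fs))              # FFT bin nearest freq
            xf = np.fft.rfft(x, axis=1)[:, k]
            yf = np.fft.rfft(y, axis=1)[:, k]
            d = xf * np.conj(yf)
            d /= np.abs(d)                             # keep phase difference only
            rho = d.mean()                             # complex phase coupling
            return np.imag(rho) ** 2 / (1.0 - np.real(rho) ** 2)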

  18. Children's Auditory Working Memory Performance in Degraded Listening Conditions

    ERIC Educational Resources Information Center

    Osman, Homira; Sullivan, Jessica R.

    2014-01-01

    Purpose: The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It…

  19. Learning Auditory Discrimination with Computer-Assisted Instruction: A Comparison of Two Different Performance Objectives.

    ERIC Educational Resources Information Center

    Steinhaus, Kurt A.

    A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…

  20. Relationship between Physiological and Parent-Observed Auditory Over-Responsiveness in Children with Typical Development and Those with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Takahashi, Hidetoshi; Nakahachi, Takayuki; Stickley, Andrew; Ishitobi, Makoto; Kamio, Yoko

    2018-01-01

    The objective of this study was to investigate relationships between caregiver-reported sensory processing abnormalities, and the physiological index of auditory over-responsiveness evaluated using acoustic startle response measures, in children with autism spectrum disorders and typical development. Mean acoustic startle response magnitudes in…

  1. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    PubMed

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  2. Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex

    PubMed Central

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525
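
    Tagging each stream with its own AM frequency allows the cortical response to each stream to be read out as EEG spectral power at exactly that modulation rate. A minimal sketch of this frequency-tagging readout follows; the modulation rates and sampling rate are placeholders, not the study's values.

        import numpy as np

        def am_tracking_power(eeg, fs, am_freqs=(4.0, 7.0)):
            """Spectral power at each tagged AM frequency.

            eeg : 1-D EEG segment. am_freqs are the two competing modulation
            rates (placeholder values). Enhanced power at one AM frequency
            indexes the response to the stream carrying that modulation.
            """
            spectrum = np.abs(np.fft.rfft(eeg)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in am_freqs}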

  3. Effect of training and level of external auditory feedback on the singing voice: volume and quality

    PubMed Central

    Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J.

    2015-01-01

    Background: Previous research suggests that classically trained professional singers rely not only on external auditory feedback but also on proprioceptive feedback associated with internal voice sensitivities. Objectives: The Lombard effect in singers and the relationship between sound pressure level (SPL) and external auditory feedback were evaluated for professional and non-professional singers. Additionally, the relationship between voice quality, evaluated in terms of the Singing Power Ratio (SPR), and external auditory feedback, level of accompaniment, voice register, and singer gender was analyzed. Methods: The subjects were 10 amateur or beginner singers and 10 classically trained professional or semi-professional singers (10 males and 10 females). Subjects sang an excerpt from the Star-Spangled Banner with three different levels of accompaniment (70, 80 and 90 dBA) and with three different levels of external auditory feedback. SPL and SPR were analyzed. Results: The Lombard effect was stronger for non-professional singers than professional singers. Higher levels of external auditory feedback were associated with a reduction in SPL. As predicted, the mean SPR was higher for professional than non-professional singers. Better voice quality was detected in the presence of higher levels of external auditory feedback. Conclusions: With an increase in training, the singer's reliance on external auditory feedback decreases. PMID:26186810
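
    For reference, one common definition of the Singing Power Ratio compares the strongest spectral peak in the 2-4 kHz region (near the singer's formant) with the strongest peak in the 0-2 kHz region. A sketch under that assumed definition:

        import numpy as np

        def singing_power_ratio(signal, fs):
            """Singing Power Ratio of a sung vowel, in dB.

            Assumed definition: level of the strongest spectral peak in
            2-4 kHz relative to the strongest peak in 0-2 kHz; stronger
            energy near the singer's formant raises the value.
            """
            spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            low = spectrum[(freqs >= 0) & (freqs < 2000)].max()
            high = spectrum[(freqs >= 2000) & (freqs < 4000)].max()
            return 20 * np.log10(high / low)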

  4. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, the firing characteristics of single neurons across the entire auditory system, such as their frequency tuning, can change significantly with stimulus intensity. Physiological correlates of the level-constancy of auditory representations should hence be manifested at the level of larger neuronal assemblies or population patterns. In this study we investigated how information about frequency and sound level is integrated at the circuit level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensity, progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site, inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels, without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
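
    CSD analysis estimates local current sinks and sources as the negative second spatial derivative of the field potentials recorded across equally spaced laminar contacts. A minimal sketch follows; the contact spacing and tissue conductivity are generic placeholder values.

        import numpy as np

        def current_source_density(lfp, spacing_um=100.0, sigma=0.3):
            """Second-spatial-derivative CSD estimate from laminar LFPs.

            lfp : (n_channels, n_samples) field potentials ordered from
            superficial to deep contacts. Sinks appear as negative CSD
            values, sources as positive. Parameter values are placeholders.
            """
            h = spacing_um * 1e-6                      # contact spacing in m
            d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]    # second difference in depth
            return -sigma * d2 / h ** 2                # standard CSD estimate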

  5. Relationship between Auditory and Cognitive Abilities in Older Adults

    PubMed Central

    Sheft, Stanley

    2015-01-01

    Objective: To evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods: Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer’s Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without a diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects, with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results: Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariate race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions: For a cohort of older adults without a diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed a significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression-model prediction of cognitive performance, demonstrating an association of central auditory ability with cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423
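
    The regression reported here, global cognition predicted from an auditory measure plus demographic covariates, has the following bare-bones ordinary-least-squares form; the variable names and coding are placeholders, not the study's dataset.

        import numpy as np

        def cognition_regression(cognition, spectral_disc, age, education, race):
            """OLS fit of global cognition on an auditory measure + covariates.

            All inputs are 1-D arrays over participants with placeholder
            names. Returns coefficients in the order
            [intercept, spectral_disc, age, education, race].
            """
            X = np.column_stack([np.ones(len(age)), spectral_disc,
                                 age, education, race])
            beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
            return beta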

  6. Oscillographic and Physiological Measurements of Delayed Auditory Feedback and Anxiety Factors in Stutterers and Non-Stutterers.

    ERIC Educational Resources Information Center

    Timmons, Beverly A.; Boudreau, James P.

    Reported are five studies on the use of delayed auditory feedback (DAF) with stutterers. The first study indicates that sex differences and age differences in temporal reaction were found when subjects (5-, 7-, 9-, 11-, and 13-years-old) recited a nursery rhyme under DAF and NAF (normal auditory feedback) conditions. The second study is reported…

  7. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2 and P300 waves - along with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. After treatment and therapy, speech was assessed again, psychoacoustic tests were repeated, and P300 cortical potentials were recorded. Statistical analyses were then performed. The analyses revealed that auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  8. Association of auditory-verbal and visual hallucinations with impaired and improved recognition of colored pictures.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana

    2015-09-01

    A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  9. Peripheral auditory processing changes seasonally in Gambel’s white-crowned sparrow

    PubMed Central

    Caras, Melissa L.; Brenowitz, Eliot; Rubel, Edwin W

    2010-01-01

    Song in oscine birds is a learned behavior that plays important roles in breeding. Pronounced seasonal differences in song behavior, and in the morphology and physiology of the neural circuit underlying song production, are well documented in many songbird species. Androgenic and estrogenic hormones largely mediate these seasonal changes. While much work has focused on the hormonal mechanisms underlying seasonal plasticity in songbird vocal production, relatively little work has investigated seasonal and hormonal effects on songbird auditory processing, particularly at the peripheral level. We addressed this issue in Gambel’s white-crowned sparrow (Zonotrichia leucophrys gambelii), a highly seasonal breeder. Photoperiod and hormone levels were manipulated in the laboratory to simulate natural breeding and non-breeding conditions. Peripheral auditory function was assessed by measuring the auditory brainstem response (ABR) and distortion product otoacoustic emissions (DPOAEs) of males and females in both conditions. Birds exposed to breeding-like conditions demonstrated elevated thresholds and prolonged peak latencies compared with birds housed under non-breeding-like conditions. There were no changes in DPOAEs, however, which indicates that the seasonal differences in ABRs do not arise from changes in hair cell function. These results suggest that seasons and hormones impact auditory processing, as well as vocal production, in wild songbirds. PMID:20563817
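
    DPOAEs are measured by presenting two primary tones f1 and f2 and reading the ear-canal spectrum at the cubic distortion frequency 2*f1 - f2, a component generated by healthy hair cells. A minimal sketch of that readout follows; the primary frequencies and the level reference are placeholders.

        import numpy as np

        def dpoae_level(mic, fs, f1=4000.0, f2=4800.0):
            """Relative level at the 2*f1 - f2 distortion product.

            mic : 1-D ear-canal microphone recording during two-tone
            stimulation. Primary frequencies are placeholders; returns a
            rough dB level re full scale rather than calibrated dB SPL.
            """
            fdp = 2 * f1 - f2                          # cubic distortion frequency
            spectrum = np.abs(np.fft.rfft(mic * np.hanning(len(mic))))
            freqs = np.fft.rfftfreq(len(mic), d=1.0 / fs)
            amp = spectrum[np.argmin(np.abs(freqs - fdp))]
            return 20 * np.log10(amp / len(mic))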

  10. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    Our objective was to use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than in place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  11. Neuronal plasticity and multisensory integration in filial imprinting.

    PubMed

    Town, Stephen Michael; McCabe, Brian John

    2011-03-10

    Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and novel object and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.

  12. Neuronal Plasticity and Multisensory Integration in Filial Imprinting

    PubMed Central

    Town, Stephen Michael; McCabe, Brian John

    2011-01-01

    Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and novel object and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus. PMID:21423770

  13. Translational control of auditory imprinting and structural plasticity by eIF2α.

    PubMed

    Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L

    2016-12-23

    The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET), and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and the related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds promise for rejuvenating adult brain plasticity and restoring learning and memory in a variety of cognitive disorders.

  14. Neural Correlates of the Lombard Effect in Primate Auditory Cortex

    PubMed Central

    Eliades, Steven J.

    2012-01-01

    Speaking is a sensory-motor process that involves constant self-monitoring to ensure accurate vocal production. Self-monitoring of vocal feedback allows rapid adjustment to correct perceived differences between intended and produced vocalizations. One important behavior in vocal feedback control is a compensatory increase in vocal intensity in response to noise masking during vocal production, commonly referred to as the Lombard effect. This behavior requires mechanisms for continuously monitoring auditory feedback during speaking. However, the underlying neural mechanisms are poorly understood. Here we show that when marmoset monkeys vocalize in the presence of masking noise that disrupts vocal feedback, the compensatory increase in vocal intensity is accompanied by a shift in auditory cortex activity toward the neural response patterns seen during vocalization under normal feedback conditions. Furthermore, we show that neural activity in auditory cortex during a vocalization phrase predicts vocal intensity compensation in subsequent phrases. These observations demonstrate that the auditory cortex participates in self-monitoring during the Lombard effect, and may play a role in the compensation of noise masking during feedback-mediated vocal control. PMID:22855821

  15. Rapid change in articulatory lip movement induced by preceding auditory feedback during production of bilabial plosives.

    PubMed

    Mochida, Takemi; Gomi, Hiroaki; Kashino, Makio

    2010-11-08

    There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbations in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied, except for the control of voice fundamental frequency, voice amplitude, and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of the articulators has not been clarified. This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a timing different from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset when /pa/ was presented 50 ms before the expected timing. This change was not significant under the other feedback conditions we tested. The earlier articulation rapidly induced by the preceding auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self-movement. The timing- and context-dependent effects of feedback alteration suggest that sensory error detection works in a temporally asymmetric window in which acoustic features of the syllable to be produced may be coded.

  16. The Effect of Noise on the Relationship Between Auditory Working Memory and Comprehension in School-Age Children.

    PubMed

    Sullivan, Jessica R; Osman, Homira; Schafer, Erin C

    2015-06-01

    The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and in noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to the demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.
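
    The -5 dB SNR condition above fixes the ratio of speech power to noise power. As a rough illustration of how such stimuli are typically constructed (the article does not describe its stimulus-generation procedure; the function name and signal values below are illustrative), the masker can be rescaled so the mixture hits the target SNR:

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
            and return the mixture (illustrative helper, not from the study)."""
            noise = noise[:len(speech)]                 # assume noise >= speech length
            p_speech = np.mean(speech ** 2)             # average power of the target
            p_noise = np.mean(noise ** 2)               # average power of the masker
            # Solve 10*log10(p_speech / (g**2 * p_noise)) = snr_db for the gain g
            gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
            return speech + gain * noise

        fs = 16000
        t = np.arange(fs) / fs
        tone = np.sin(2 * np.pi * 1000 * t)             # 1 s stand-in for a speech token
        mixture = mix_at_snr(tone, np.random.randn(fs), snr_db=-5.0)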

  17. Judging hardness of an object from the sounds of tapping created by a white cane.

    PubMed

    Nunokawa, K; Seki, Y; Ino, S; Doi, K

    2014-01-01

    The white cane plays a vital role in supporting the independent mobility of the visually impaired. Allowing the recognition of target attributes through the contact of a white cane is an important function. We have conducted research to obtain fundamental knowledge concerning the exploration methods used to perceive the hardness of an object through contact with a white cane. This research has allowed us to examine methods that enhance accuracy in the perception of objects, as well as the materials and structures of a white cane. Previous research suggests that the roles of both auditory and tactile information from the white cane must be considered in determining an object's hardness. This experimental study examined the ability of people to perceive the hardness of an object solely through the tapping sounds of a white cane (i.e., auditory information), using the method of magnitude estimation. Two types of sounds were used to estimate hardness: 1) the playback of recorded tapping sounds and 2) the sounds produced on-site by tapping. Three types of handgrips were used to create different sounds of tapping on an object with a cane. The participants in this experiment were five sighted university students wearing eye masks and two totally blind students who walk independently with a white cane. The results showed that both the sighted university students and the totally blind participants were able to accurately judge the hardness of an object solely from the auditory information provided by a white cane. For the blind participants, different handgrips significantly influenced the accuracy of their estimation of an object's hardness.
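
    Magnitude estimation of this kind is conventionally analyzed by fitting Stevens' power law, psi = k * phi^n, as a straight line in log-log coordinates. A minimal sketch of that fit, with made-up hardness values and listener estimates (not the study's data):

        import numpy as np

        # Hypothetical data: physical hardness of the tapped samples (arbitrary
        # units) and one listener's numeric hardness estimates.
        hardness = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
        estimates = np.array([12.0, 19.0, 35.0, 61.0, 110.0])

        # Stevens' power law, psi = k * phi**n, is linear in log-log coordinates:
        # log(psi) = log(k) + n * log(phi). Fit the exponent n and intercept log(k).
        n, log_k = np.polyfit(np.log(hardness), np.log(estimates), deg=1)
        print(f"exponent n = {n:.2f}, scaling constant k = {np.exp(log_k):.2f}")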

  18. Controlling the perceived distance of an auditory object by manipulation of loudspeaker directivity.

    PubMed

    Laitinen, Mikko-Ville; Politis, Archontis; Huhtakallio, Ilkka; Pulkki, Ville

    2015-06-01

    This work presents a method to control the perceived distance of an auditory object by changing the directivity pattern of a loudspeaker and consequently the direct-to-reverberant ratio at the listening spot. Control of the directivity pattern is achieved by beamforming using a compact multi-driver loudspeaker unit. A small cubic array consisting of six drivers is assembled, and per-driver beamforming filters are derived from directional measurements of the array. The proposed method is evaluated using formal listening tests. The results show that the perceived distance can be controlled effectively by modifying the directivity pattern.
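
    The direct-to-reverberant ratio (DRR) that the beamforming manipulates is commonly measured from a room impulse response as the energy in a short window around the direct-path arrival versus the energy of everything after it. A minimal sketch under that common definition (the paper's exact analysis window may differ):

        import numpy as np

        def direct_to_reverberant_ratio(h, fs, window_ms=2.5):
            """DRR in dB from a room impulse response `h`: energy in a short
            window around the direct-path peak versus all later energy."""
            peak = int(np.argmax(np.abs(h)))            # direct-sound arrival sample
            half = int(fs * window_ms / 1000)           # half-width of direct window
            direct = np.sum(h[max(0, peak - half):peak + half] ** 2)
            reverb = np.sum(h[peak + half:] ** 2)
            return 10 * np.log10(direct / reverb)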

  19. Effect of delayed auditory feedback on normal speakers at two speech rates

    NASA Astrophysics Data System (ADS)

    Stuart, Andrew; Kalinowski, Joseph; Rastatter, Michael P.; Lynch, Kerry

    2002-05-01

    This study investigated the effect of short and long auditory feedback delays at two speech rates in normal speakers. Seventeen participants spoke under delayed auditory feedback (DAF) at 0, 25, 50, and 200 ms at normal and fast rates of speech. Participants displayed two to three times more dysfluencies at 200 ms (p < 0.05) than with no delay or the shorter delays. Significantly more dysfluencies were also observed at the fast rate of speech (p = 0.028). These findings implicate the peripheral feedback system(s) of fluent speakers in the disruptive effects of DAF on normal speech production at long auditory feedback delays. Considering the contrast in fluency/dysfluency exhibited between normal speakers and those who stutter at short and long delays, it appears that the speech disruption of normal speakers under DAF is a poor analog of stuttering.
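
    An offline sketch of the delay manipulation itself: delayed auditory feedback plays the talker's signal back late by a fixed number of samples (a real DAF rig does this sample-by-sample on a live audio stream; the signal here is a stand-in):

        import numpy as np

        def delay_line(signal, fs, delay_ms):
            """Return `signal` delayed by `delay_ms`, zero-padded at the start,
            as a DAF rig would play a talker's voice back late."""
            d = int(round(fs * delay_ms / 1000))
            return np.concatenate([np.zeros(d), signal])[:len(signal)]

        fs = 44100
        voice = np.random.randn(fs)          # stand-in for 1 s of microphone input
        feedback = {ms: delay_line(voice, fs, ms) for ms in (0, 25, 50, 200)}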

  20. EXEL; Experience for Children in Learning. Parent-Directed Activities to Develop: Oral Expression, Visual Discrimination, Auditory Discrimination, Motor Coordination.

    ERIC Educational Resources Information Center

    Behrmann, Polly; Millman, Joan

    The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…

  1. A Study of the Role of Central Auditory Processing in Learning Disabilities: A Prospectus Submitted to the Department of Speech.

    ERIC Educational Resources Information Center

    Murray, Hugh

    Proposed is a study to evaluate the auditory systems of learning disabled (LD) students with a new audiological, diagnostic, stimulus apparatus which is capable of objectively measuring the interaction of the binaural aspects of hearing. The author points out problems with LD definitions that exclude neurological disorders. The detection of…

  2. Children Use Object-Level Category Knowledge to Detect Changes in Complex Auditory Scenes

    ERIC Educational Resources Information Center

    Vanden Bosch der Nederlanden, Christina M.; Snyder, Joel S.; Hannon, Erin E.

    2016-01-01

    Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a…

  3. Sensing in a noisy world: lessons from auditory specialists, echolocating bats.

    PubMed

    Corcoran, Aaron J; Moss, Cynthia F

    2017-12-15

    All animals face the essential task of extracting biologically meaningful sensory information from the 'noisy' backdrop of their environments. Here, we examine mechanisms used by echolocating bats to localize objects, track small prey and communicate in complex and noisy acoustic environments. Bats actively control and coordinate both the emission and reception of sound stimuli through integrated sensory and motor mechanisms that have evolved together over tens of millions of years. We discuss how bats behave in different ecological scenarios, including detecting and discriminating target echoes from background objects, minimizing acoustic interference from competing conspecifics and overcoming insect noise. Bats tackle these problems by deploying a remarkable array of auditory behaviors, sometimes in combination with the use of other senses. Behavioral strategies such as ceasing sonar call production and active jamming of the signals of competitors provide further insight into the capabilities and limitations of echolocation. We relate these findings to the broader topic of how animals extract relevant sensory information in noisy environments. While bats have highly refined abilities for operating under noisy conditions, they face the same challenges encountered by many other species. We propose that the specialized sensory mechanisms identified in bats are likely to occur in analogous systems across the animal kingdom. © 2017. Published by The Company of Biologists Ltd.

  4. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  5. Auditory Alterations in Children Infected by Human Immunodeficiency Virus Verified Through Auditory Processing Test

    PubMed Central

    Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima

    2016-01-01

    Introduction The auditory system of HIV-positive children may have deficits at various levels; for example, there is a high incidence of middle ear problems that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing as verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. On the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. On the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas on the SSW, 86.67% of the children showed deficiencies indicating deficits in auditory figure-ground, attention, and memory skills. Furthermore, there were more errors under background-noise conditions in both age groups, with most errors occurring in the left ear in the group of 8-year-olds and similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV, and its comorbidity with several biological and environmental factors, indicates the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213

  6. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times for individual words and for the full sentence reveal that auditory perceptual simulation again modulated reading rate, and that auditory perceptual simulation of the faster Indian-English speech led to faster reading rates than auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rate, rather than the difficulty of simulating a non-native accent, is the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy

    PubMed Central

    Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures, which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they occupy influential positions within emotion-processing brain networks with "small-world" properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate that our results will stimulate research specifying the role of the auditory cortex, and sensory systems in general, in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142

  8. The effects of divided attention on auditory priming.

    PubMed

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  9. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.

  10. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas

    PubMed Central

    Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.

    2014-01-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815
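
    The stimulus described above has a closed form, s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t). A minimal sketch generating the four SAM carriers with the study's 140 Hz modulation rate (duration, sampling rate, and modulation depth are assumptions, not taken from the paper):

        import numpy as np

        def sam_tone(fc, fm, dur, fs=48000, m=1.0):
            """Sinusoidally amplitude-modulated (SAM) tone:
            s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
            t = np.arange(int(dur * fs)) / fs
            return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

        # The four carriers and the 140 Hz modulation rate described in the study
        stimuli = {fc: sam_tone(fc, fm=140.0, dur=0.5) for fc in (1000, 2000, 4000, 8000)}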

  11. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380
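
    The visual enhancement measure Ra is conventionally computed as the audiovisual gain normalized by the headroom left above auditory-only performance. A sketch under that conventional definition (the paper's exact normalization should be checked against its Method section):

        def visual_enhancement(av, a):
            """Ra = (AV - A) / (1 - A): audiovisual gain normalized by the
            headroom above auditory-only performance (proportions correct)."""
            return (av - a) / (1.0 - a)

        # A listener at 50% auditory-only and 80% audiovisual recovers 60%
        # of the available headroom.
        print(visual_enhancement(av=0.80, a=0.50))   # -> 0.6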

  12. Sensitivity and specificity of auditory steady‐state response testing

    PubMed Central

    Rabelo, Camila Maia; Schochat, Eliane

    2011-01-01

    INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
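
    Sensitivity and specificity as reported here reduce to simple ratios over the diagnostic confusion counts. A minimal sketch with illustrative counts (not the study's data):

        def sensitivity_specificity(tp, fn, tn, fp):
            """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Illustrative counts only
        sens, spec = sensitivity_specificity(tp=18, fn=7, tn=22, fp=3)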

  13. Auditory priming improves neural synchronization in auditory-motor entrainment.

    PubMed

    Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J

    2018-05-22

    Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes in the combined condition differed for each group relative to its respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. In the combined condition, the auditory-first group had significantly lower evoked power in a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency that facilitates the motor system during the process of entrainment. These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
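
    The distinction between total and evoked power matters here: total power squares single-trial wavelet magnitudes and then averages, retaining non-phase-locked activity, whereas evoked power squares the trial average. A minimal sketch of total power at one frequency, using a hand-rolled complex Morlet wavelet (parameters are assumptions, not those of the study):

        import numpy as np

        def total_power(trials, fs, freq, n_cycles=7.0):
            """Trial-averaged total power at `freq` for `trials` of shape
            (n_trials, n_samples): square single-trial wavelet magnitudes
            first, then average, so non-phase-locked activity is retained."""
            sigma_t = n_cycles / (2 * np.pi * freq)       # wavelet width in seconds
            t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
            wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
            wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit-energy wavelet
            power = [np.abs(np.convolve(tr, wavelet, mode="same")) ** 2 for tr in trials]
            return np.mean(power, axis=0)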

  14. The singular nature of auditory and visual scene analysis in autism

    PubMed Central

    Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa

    2017-01-01

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping, and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. PMID:28044025

  15. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    PubMed

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals, over a range of time scales from milliseconds to seconds, renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: A healthy-aging perspective

    PubMed Central

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2014-01-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals, over a range of time scales from milliseconds to seconds, renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. PMID:24956028

  17. Short-wavelength infrared laser activates the auditory neurons: comparing the effect of 980 vs. 810 nm wavelength.

    PubMed

    Tian, Lan; Wang, Jingxuan; Wei, Ying; Lu, Jianren; Xu, Anting; Xia, Ming

    2017-02-01

    Research on auditory neural triggering by optical stimuli has been developed as an emerging technique to elicit auditory neural responses, which may provide an alternative to cochlear implants. However, most previous studies have focused on longer-wavelength near-infrared (>1800 nm) lasers. A comparison of the effects of different laser wavelengths in the short-wavelength infrared (SWIR) range on auditory neural stimulation has not previously been reported. In this study, pulsed 980- and 810-nm SWIR lasers were applied as optical stimuli to irradiate the auditory neurons in the cochlea of five deafened guinea pigs, and the neural response under the two laser wavelengths was compared by recording the evoked optical auditory brainstem responses (OABRs). In addition, the effects of radiant exposure, laser pulse width, and threshold with the two laser wavelengths were further investigated and compared. One-way analysis of variance (ANOVA) was used to analyze these data. Results showed that the OABR amplitude with the 980-nm laser was higher than that with the 810-nm laser under the same radiant exposure from 10 to 102 mJ/cm². The 980-nm laser also had a lower threshold radiant exposure than the 810-nm laser at varied pulse durations in the 20-500 μs range. Moreover, the 810-nm laser had a wider optimized pulse duration range than the 980-nm laser for auditory neural stimulation.
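
    Group comparisons of this kind with a one-way ANOVA are straightforward to reproduce in outline. A sketch with invented OABR amplitudes for the two wavelengths (values are illustrative only):

        import numpy as np
        from scipy import stats

        # Invented OABR amplitudes (uV) for the two wavelengths at one radiant
        # exposure; values are for illustration only.
        amp_980 = np.array([1.8, 2.1, 1.9, 2.3, 2.0])
        amp_810 = np.array([1.2, 1.5, 1.3, 1.6, 1.4])
        f_stat, p_value = stats.f_oneway(amp_980, amp_810)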

  18. Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex

    PubMed Central

    Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2010-01-01

    Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343

  19. The Relationship between Types of Attention and Auditory Processing Skills: Reconsidering Auditory Processing Disorder Diagnosis

    PubMed Central

    Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva

    2018-01-01

    Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD, and to assess whether a proposed diagnostic protocol for APD that includes measures of attention could provide useful information for APD management. A pilot study including 27 children, aged 7-11 years, referred for APD assessment was conducted. The validated Test of Everyday Attention for Children, with visual and auditory attention tasks, the Listening in Spatialized Noise-Sentences test, the Children's Communication Checklist questionnaire, and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationships between these tests, and Cochran's Q test analysis comparing the proportions of diagnosis under each proposed battery, were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with an APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD than the attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not yield a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would otherwise have been considered not to have ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred on the auditory modality, but further examination of types of attention in both modalities is required. Revising diagnostic criteria to incorporate attention tests and the inattentive type of APD in the test battery provides additional useful data to clinicians to ensure careful interpretation of APD assessments. PMID:29441033
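
    Cochran's Q tests whether the proportion of positive outcomes differs across several related binary measurements, here whether the same children meet criteria under each candidate battery. A minimal sketch with invented diagnosis indicators (not the study's data):

        import numpy as np
        from scipy import stats

        def cochrans_q(x):
            """Cochran's Q for a (subjects x conditions) 0/1 matrix, e.g. whether
            each child meets diagnostic criteria under each candidate battery."""
            n, k = x.shape
            col, row, grand = x.sum(axis=0), x.sum(axis=1), x.sum()
            q = (k - 1) * (k * np.sum(col ** 2) - grand ** 2) / (k * grand - np.sum(row ** 2))
            return q, stats.chi2.sf(q, df=k - 1)

        # Invented indicators: 8 children x 3 batteries (1 = meets criteria)
        x = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0],
                      [1, 1, 0], [1, 0, 1], [1, 1, 0], [0, 1, 0]])
        q, p = cochrans_q(x)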

  20. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

    Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921

  1. Behavioral and electrophysiological auditory processing measures in traumatic brain injury after acoustically controlled auditory training: a long-term study

    PubMed Central

    Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro

    2015-01-01

    ABSTRACT Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and had undergone an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. Moreover, the amplitude of all waves increased, reaching statistical significance for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears on reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words test in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures one year after training. PMID:26676270

  2. Using Neuroplasticity-Based Auditory Training to Improve Verbal Memory in Schizophrenia

    PubMed Central

    Fisher, Melissa; Holland, Christine; Merzenich, Michael M.; Vinogradov, Sophia

    2009-01-01

    Objective Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Method Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Results Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Conclusions Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach. PMID:19448187

  3. Auditory Speech Perception Development in Relation to Patient's Age with Cochlear Implant

    PubMed Central

    Ciscare, Grace Kelly Seixas; Mantello, Erika Barioni; Fortunato-Queiroz, Carla Aparecida Urzedo; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa dos

    2017-01-01

    Introduction A cochlear implant in adolescent patients with pre-lingual deafness is still a debatable issue. Objective The objective of this study is to analyze and compare the development of auditory speech perception in children with pre-lingual auditory impairment submitted to cochlear implantation, in different age groups in the first year after implantation. Method This is a retrospective, documentary study in which we analyzed 78 reports of children with severe bilateral sensorineural hearing loss who were unilateral cochlear implant users, of both sexes. They were divided into three groups: G1, 22 children younger than 42 months; G2, 28 children aged between 43 and 83 months; and G3, 28 children older than 84 months. We collected medical record data to characterize the patients, auditory thresholds with cochlear implants, assessment of speech perception, and auditory skills. Results There was no statistically significant association of the results among groups G1, G2, and G3 with sex, caregiver education level, city of residence, or speech perception level. There was a moderate correlation between age and hearing aid use time, and between age and cochlear implant use time. There was a strong correlation between age and the age at which cochlear implantation was performed, and between hearing aid use time and the age at implantation. Conclusion There was no statistical difference in speech perception in relation to the patient's age at cochlear implantation. There were statistically significant differences in auditory deprivation time between G3 and G1 and between G2 and G1, and in hearing aid use time between G3 and G2 and between G3 and G1. PMID:28680487

  4. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.

  5. Using an auditory sensory substitution device to augment vision: evidence from eye movements.

    PubMed

    Wright, Thomas D; Margolis, Aaron; Ward, Jamie

    2015-03-01

    Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.

  6. Depressive and Anxiety Symptoms in Older Adults With Auditory, Vision, and Dual Sensory Impairment.

    PubMed

    Simning, Adam; Fox, Meghan L; Barnett, Steven L; Sorensen, Silvia; Conwell, Yeates

    2018-06-01

    The objective of the study is to examine the association of auditory, vision, and dual sensory impairment with late-life depressive and anxiety symptoms. Our study included 7,507 older adults from the National Health & Aging Trends Study, a nationally representative sample of U.S. Medicare beneficiaries. Auditory and vision impairment were determined by self-report, and depressive and anxiety symptoms were evaluated by the two-item Patient Health Questionnaire (PHQ-2) and two-item Generalized Anxiety Disorder Scale (GAD-2), respectively. Auditory, vision, and dual impairment were associated with an increased risk of depressive and anxiety symptoms in multivariable analyses accounting for sociodemographics, medical comorbidity, and functional impairment. Auditory, vision, and dual impairment were also associated with an increased risk for depressive and anxiety symptoms that persist or were of new onset after 1 year. Screening older adults with sensory impairments for depression and anxiety, and screening those with late-life depression and anxiety for sensory impairments, may identify treatment opportunities to optimize health and well-being.

  7. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    PubMed

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  8. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    PubMed Central

    Sugihara, Tadashi; Diltz, Mark D.; Averbeck, Bruno B.; Romanski, Lizabeth M.

    2009-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O’Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication. PMID:17065454

  9. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asynchronies between the modalities.

  10. Auditory Confrontation Naming in Alzheimer’s Disease

    PubMed Central

    Brandt, Jason; Bakker, Arnold; Maroof, David Aaron

    2010-01-01

    Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630

  11. Tailoring auditory training to patient needs with single and multiple talkers: transfer-appropriate gains on a four-choice discrimination test.

    PubMed

    Barcroft, Joe; Sommers, Mitchell S; Tye-Murray, Nancy; Mauzé, Elizabeth; Schroy, Catherine; Spehar, Brent

    2011-11-01

    Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. The experiment implemented a 2 × 2 × 2 mixed design, with training condition as a between-participants variable and testing interval and test version as repeated-measures variables. Participants completed a computerized six-week auditory training program wherein they heard either the speech of a single talker or the speech of six talkers. Training gains were assessed with single-talker and multi-talker versions of the Four-choice discrimination test. Participants in both groups were tested on both versions. Sixty-nine adult hearing-aid users were randomly assigned to either single-talker or multi-talker auditory training. Both groups showed significant gains on both test versions. Participants who trained with multiple talkers showed greater improvement on the multi-talker version whereas participants who trained with a single talker showed greater improvement on the single-talker version. Transfer-appropriate gains occurred following auditory training, suggesting that auditory training can be designed to target specific patient needs.

  12. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    PubMed

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  13. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds. PMID:22582038
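
    The entropy-based signal attributes mentioned above lend themselves to a compact illustration. The sketch below computes a per-frame spectral entropy from a spectrogram and summarizes its change over time; this is one plausible operationalization of a "spectral structure variation," not necessarily the authors' exact measure.

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_entropy_profile(sound, sr):
    """Shannon entropy (bits) of the normalized spectrum in each frame."""
    f, t, S = spectrogram(sound, fs=sr, nperseg=1024, noverlap=512)
    P = S / (S.sum(axis=0, keepdims=True) + 1e-12)  # per-frame distribution
    H = -(P * np.log2(P + 1e-12)).sum(axis=0)
    return t, H

def spectral_structure_variation(sound, sr):
    """Scalar summary of how much spectral entropy changes over time."""
    _, H = spectral_entropy_profile(sound, sr)
    return np.abs(np.diff(H)).mean()
```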

  14. Structured Activities in Perceptual Training to Aid Retention of Visual and Auditory Images.

    ERIC Educational Resources Information Center

    Graves, James W.; And Others

    The experimental program in structured activities in perceptual training was said to have two main objectives: to train children in retention of visual and auditory images and to increase the children's motivation to learn. Eight boys and girls participated in the program for two hours daily for a 10-week period. The age range was 7.0 to 12.10…

  15. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Pinto, Marcos; Conklin, Heather M.; Li, Chenghong

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.

  16. Longitudinal Comparison of Auditory Steady-State Evoked Potentials in Preterm and Term Infants: The Maturation Process

    PubMed Central

    Sousa, Ana Constantino; Didoné, Dayane Domeneghini; Sleifer, Pricila

    2017-01-01

    Introduction Preterm neonates are at risk of changes in their auditory system development, which explains the need for auditory monitoring of this population. The Auditory Steady-State Response (ASSR) is an objective method for obtaining electrophysiological thresholds, with broad applicability in neonatal and pediatric populations. Objective The purpose of this study is to compare ASSR thresholds in preterm and term infants evaluated at two stages. Method The study included 63 normal-hearing neonates: 33 preterm and 30 term. They underwent ASSR assessment in both ears simultaneously through insert earphones at frequencies of 500 to 4000 Hz with modulation rates of 77 to 103 Hz. Intensity was presented at decreasing levels to detect the minimum response level. At 18 months, 26 of the 33 preterm infants returned for a new ASSR assessment and were compared with 30 full-term infants. Groups were compared according to gestational age. Results Electrophysiological thresholds were higher in preterm than in full-term neonates (p < 0.05) at the first testing. There were no significant differences between ears or by gender. At 18 months, there was no difference between groups (p > 0.05) on any of the variables described. Conclusion In the first evaluation, preterm infants had higher ASSR thresholds. There was no difference at 18 months of age, showing the auditory maturation of preterm infants throughout their development. PMID:28680486

  17. Mismatch negativity (MMN) reveals inefficient auditory ventral stream function in chronic auditory comprehension impairments.

    PubMed

    Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen

    2014-10-01

    Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.

  18. Cooperative dynamics in auditory brain response

    NASA Astrophysics Data System (ADS)

    Kwapień, J.; Drożdż, S.; Liu, L. C.; Ioannides, A. A.

    1998-11-01

    Simultaneous estimates of activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulations were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analyzed using the concept of mutual information. The analysis constitutes an objective method for addressing the nature of interhemispheric correlations in response to auditory stimulation. The results provide clear evidence of such correlations, mediated by direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the interhemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
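
    A simple way to realize the analysis described above is a histogram-based mutual-information estimate computed at a range of interhemispheric lags, with the peak lag indicating which side leads. The sketch below assumes equal-length, continuous-valued activity traces and is illustrative; it is not the estimator the authors used.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate MI (bits) between two signals via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

def peak_lag_ms(left, right, sr, max_lag_ms=30):
    """Lag (ms) at which MI peaks; positive means `left` leads `right`."""
    max_lag = int(max_lag_ms * sr / 1000)
    lags = list(range(-max_lag, max_lag + 1))
    mi = [mutual_information(left[max(0, -k):len(left) - max(0, k)],
                             right[max(0, k):len(right) - max(0, -k)])
          for k in lags]
    return lags[int(np.argmax(mi))] * 1000.0 / sr
```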

  19. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  20. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was related to a combination of working memory and peripheral auditory factors across all listeners in the study. The results of this study are consistent with previous findings that a subset of listeners with MTBI has objective auditory deficits. Speech-in-noise performance was related to a combination of auditory and nonauditory factors, confirming the important role of audiology in MTBI rehabilitation. Further research is needed to evaluate the prevalence and causal relationship of auditory deficits following MTBI. American Academy of Audiology

  1. Neurophysiological mechanisms of cortical plasticity impairments in schizophrenia and modulation by the NMDA receptor agonist D-serine.

    PubMed

    Kantrowitz, Joshua T; Epstein, Michael L; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M; Revheim, Nadine; Lehrfeld, Nayla P; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C

    2016-12-01

    Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time-frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. The d-serine studies suggest, first, that NMDAR dysfunction may contribute to the underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    PubMed Central

    Henshaw, Helen; Ferguson, Melanie A.

    2013-01-01

    Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431

  3. Auditory fatigue : influence of mental factors.

    DOT National Transportation Integrated Search

    1965-01-01

    Conflicting reports regarding the influence of mental tasks on auditory fatigue have recently appeared in the literature. In the present study, 10 male subjects were exposed to a 4000 cps fatigue tone at 40 dB SL for 3 min under conditions of mental ari...

  4. Emerging technologies with potential for objectively evaluating speech recognition skills.

    PubMed

    Rawool, Vishakha Waman

    2016-01-01

    Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.

  5. Hearing in noisy environments: noise invariance and contrast gain control

    PubMed Central

    Willmore, Ben D B; Cooke, James E; King, Andrew J

    2014-01-01

    Contrast gain control has recently been identified as a fundamental property of the auditory system. Electrophysiological recordings in ferrets have shown that neurons continuously adjust their gain (their sensitivity to change in sound level) in response to the contrast of sounds that are heard. At the level of the auditory cortex, these gain changes partly compensate for changes in sound contrast. This means that sounds which are structurally similar, but have different contrasts, have similar neuronal representations in the auditory cortex. As a result, the cortical representation is relatively invariant to stimulus contrast and robust to the presence of noise in the stimulus. In the inferior colliculus (an important subcortical auditory structure), gain changes are less reliably compensatory, suggesting that contrast- and noise-invariant representations are constructed gradually as one ascends the auditory pathway. In addition to noise invariance, contrast gain control provides a variety of computational advantages over static neuronal representations; it makes efficient use of neuronal dynamic range, may contribute to redundancy-reducing, sparse codes for sound and allows for simpler decoding of population responses. The circuits underlying auditory contrast gain control are still under investigation. As in the visual system, these circuits may be modulated by factors other than stimulus contrast, forming a potential neural substrate for mediating the effects of attention as well as interactions between the senses. PMID:24907308
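
    The gain adjustment described above can be caricatured in a few lines: divide deviations of the sound level from its running mean by a running estimate of contrast, so that structurally similar stimuli at different contrasts yield similar outputs. The sketch below is a toy model for intuition, not the circuit identified in the recordings.

```python
import numpy as np

def contrast_gain_control(level_db, tau=0.1, dt=0.001):
    """Toy divisive gain control on a sound-level trace (dB, NumPy array).

    A leaky integrator tracks the mean level and its variance (contrast);
    the output is the mean-subtracted level divided by the running
    standard deviation, making it approximately contrast-invariant.
    """
    a = dt / tau                       # leaky-integrator update weight
    m, v = float(level_db[0]), 1.0     # running mean and variance
    out = np.empty(len(level_db))
    for i, s in enumerate(level_db):
        m += a * (s - m)               # track mean level
        v += a * ((s - m) ** 2 - v)    # track squared contrast
        out[i] = (s - m) / np.sqrt(v + 1e-6)
    return out
```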

  6. Translational control of auditory imprinting and structural plasticity by eIF2α

    PubMed Central

    Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L

    2016-01-01

    The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET) and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders. DOI: http://dx.doi.org/10.7554/eLife.17197.001 PMID:28009255

  7. Differential Diagnosis of Speech Sound Disorder (Phonological Disorder): Audiological Assessment beyond the Pure-tone Audiogram.

    PubMed

    Iliadou, Vasiliki Vivian; Chermak, Gail D; Bamiou, Doris-Eva

    2015-04-01

    According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child's SSD. Central auditory processing disorder clinic pediatric case reports. Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech-language pathologists as a result of slower than expected progress in therapy. Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient's speech sound (phonological) disorder. Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD. American Academy of Audiology.

  8. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  9. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  10. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  11. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  12. Background noise can enhance cortical auditory evoked potentials under certain conditions

    PubMed Central

    Papesh, Melissa A.; Billings, Curtis J.; Baltzell, Lucas S.

    2017-01-01

    Objective To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. Methods CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and in speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. Results The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Conclusions Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Significance Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. PMID:25453611

  13. Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities

    PubMed Central

    Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo

    2015-01-01

    Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and phonological awareness test results, and to investigate correlations between them in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness and the auditory middle latency response were tested, with electrodes placed over the left and right hemispheres. The correlation between the measures was assessed using the Spearman rank correlation coefficient. Results There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed low scores for their age group, while on the objective examination, prolonged Pa latency in the contralateral pathway was observed. A weak-to-moderate negative correlation was observed for Pa wave latency, and a weak positive correlation for Na-Pa amplitude. PMID:26491479
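
    The correlation analysis named above is straightforward to reproduce. The sketch below uses hypothetical per-student values purely to show the call and the expected sign of the result; the numbers are not from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: one Pa latency (ms) and one syllabic-awareness
# score per student; values are illustrative only.
pa_latency = np.array([30.1, 32.4, 28.9, 35.0, 31.2])
syllabic_score = np.array([18, 12, 20, 9, 15])

rho, p = spearmanr(pa_latency, syllabic_score)
# A negative rho reproduces the reported direction: longer Pa latency
# goes with lower syllabic-awareness scores.
```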

  14. Brainstem auditory-evoked potentials as an objective tool for evaluating hearing dysfunction in traumatic brain injury.

    PubMed

    Lew, Henry L; Lee, Eun Ha; Miyoshi, Yasushi; Chang, Douglas G; Date, Elaine S; Jerger, James F

    2004-03-01

    Because of the violent nature of traumatic brain injury, patients with such injuries are susceptible to various types of trauma involving the auditory system. We report the case of a 55-yr-old man who presented with communication problems after traumatic brain injury. Initial results from behavioral audiometry and Weber/Rinne tests were not reliable because of poor cooperation. He was transferred to our service for inpatient rehabilitation, where review of the initial head computed tomographic scan showed only a left temporal bone fracture. Brainstem auditory-evoked potentials were then performed to evaluate his hearing function. The results showed bilateral absence of auditory-evoked responses, which strongly suggested bilateral deafness. This finding led to a follow-up computed tomographic scan, with a focus on the bilateral temporal bones. A subtle transverse fracture of the right temporal bone was then detected, in addition to the left temporal bone fracture previously identified. Like children with hearing impairment, traumatic brain injury patients may not be able to verbalize their auditory deficits in a timely manner. If hearing loss is suspected in a patient who is unable to participate in traditional behavioral audiometric testing, brainstem auditory-evoked potentials may be an option for evaluating hearing dysfunction.

  15. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  16. Auditory conflict and congruence in frontotemporal dementia.

    PubMed

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  17. When vision is not an option: children's integration of auditory and haptic information is suboptimal.

    PubMed

    Petrini, Karin; Remark, Alicia; Smith, Louise; Nardini, Marko

    2014-05-01

    When visual information is available, human adults, but not children, have been shown to reduce sensory uncertainty by taking a weighted average of sensory cues. In the absence of reliable visual information (e.g. an extremely dark environment, visual disorders), the use of other information is vital. Here we ask how humans combine haptic and auditory information from childhood. In the first experiment, adults and children aged 5 to 11 years judged the relative sizes of two objects in auditory, haptic, and non-conflicting bimodal conditions. In the second experiment, different groups of adults and children were tested in non-conflicting and conflicting bimodal conditions. In the first experiment, adults reduced sensory uncertainty by integrating the cues optimally, while children did not. In the second, adults and children used similar weighting strategies to solve audio-haptic conflict. These results suggest that, in the absence of visual information, optimal integration of cues for discrimination of object size develops late in childhood. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
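
    The "weighted average of sensory cues" benchmark against which the children were compared is the standard maximum-likelihood integration rule: each cue is weighted by its inverse variance, and the combined estimate is never less reliable than the better single cue. A minimal sketch, with illustrative numbers:

```python
def optimal_integration(est_a, var_a, est_h, var_h):
    """Reliability-weighted combination of an auditory and a haptic
    size estimate (maximum-likelihood integration)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_h)   # auditory weight
    combined = w_a * est_a + (1 - w_a) * est_h
    combined_var = 1 / (1 / var_a + 1 / var_h)
    return combined, combined_var

# Haptic cue twice as reliable as the auditory cue:
size, var = optimal_integration(est_a=10.0, var_a=2.0, est_h=9.0, var_h=1.0)
# size ~= 9.33 (pulled toward the more reliable haptic estimate),
# var ~= 0.67 (smaller than either single-cue variance)
```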

  18. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  19. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357
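
    The cross-cue classification logic in the two records above can be sketched compactly: train a pattern classifier on trials of one cue type and test it on the other; above-chance transfer implies a shared, cue-independent spatial code. The sketch assumes pre-extracted voxel patterns and location labels and is illustrative, not the authors' pipeline.

```python
from sklearn.svm import SVC

def cross_cue_classification(itd_X, itd_y, ild_X, ild_y):
    """Train on ITD-trial voxel patterns, test on ILD trials (and the
    reverse); returns the two transfer accuracies."""
    clf = SVC(kernel="linear")
    clf.fit(itd_X, itd_y)                 # learn locations from ITD trials
    itd_to_ild = clf.score(ild_X, ild_y)  # transfer to ILD trials
    clf.fit(ild_X, ild_y)                 # learn locations from ILD trials
    ild_to_itd = clf.score(itd_X, itd_y)  # transfer to ITD trials
    return itd_to_ild, ild_to_itd
```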

  20. Age-dependent impairment of auditory processing under spatially focused and divided attention: an electrophysiological study.

    PubMed

    Wild-Wall, Nele; Falkenstein, Michael

    2010-01-01

    Using event-related potentials (ERPs), the present study examined whether age-related differences in preparation and processing emerge especially under divided attention. Binaurally presented auditory cues called for focused (valid and invalid) or divided attention to one or both ears. Responses were required to subsequent monaurally presented valid targets (vowels), but had to be suppressed to non-target vowels or invalidly cued vowels. Middle-aged participants were more impaired under divided attention than young participants, likely owing to an age-related decline in preparatory attention following cues, as reflected in a decreased CNV. Under divided attention, target processing was increased in the middle-aged group, likely reflecting compensatory effort to meet task requirements in the difficult condition. Additionally, middle-aged participants processed invalidly cued stimuli more intensely, as reflected in the stimulus ERPs. The results suggest an age-related impairment in attentional preparation after auditory cues, especially under divided attention, and latent difficulties in suppressing irrelevant information.

  1. An Objective Measurement of the Build-Up of Auditory Streaming and of Its Modulation by Attention

    ERIC Educational Resources Information Center

    Thompson, Sarah K.; Carlyon, Robert P.; Cusack, Rhodri

    2011-01-01

    Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better…

  2. Auditory skills, language development, and adaptive behavior of children with cochlear implants and additional disabilities

    PubMed Central

    Beer, Jessica; Harris, Michael S.; Kronenberger, William G.; Holt, Rachael Frush; Pisoni, David B.

    2012-01-01

    Objective The objective of this study was to evaluate the development of functional auditory skills, language, and adaptive behavior in deaf children with cochlear implants (CI) who also have additional disabilities (AD). Design A two-group, pre-test versus post-test design was used. Study sample Comparisons were made between 23 children with CIs and ADs, and an age-matched comparison group of 23 children with CIs without ADs (No-AD). Assessments were obtained pre-CI and within 12 months post-CI. Results All but two deaf children with ADs improved in auditory skills using the IT-MAIS. Most deaf children in the AD group made progress in receptive but not expressive language using the Preschool Language Scale, but their language quotients were lower than the No-AD group. Five of eight children with ADs made progress in daily living skills and socialization skills; two made progress in motor skills. Children with ADs who did not make progress in language did show progress in adaptive behavior. Conclusions Children with deafness and ADs made progress in functional auditory skills, receptive language, and adaptive behavior. Expanded assessment that includes adaptive functioning and multi-center collaboration is recommended to best determine benefits of implantation in areas of expected growth in this clinical population. PMID:22509948

  3. Exploring the Relationship between Physiological Measures of Cochlear and Brainstem Function

    PubMed Central

    Dhar, S.; Abel, R.; Hornickel, J.; Nicol, T.; Skoe, E.; Zhao, W.; Kraus, N.

    2009-01-01

    Objective: Otoacoustic emissions and the speech-evoked auditory brainstem response are objective indices of peripheral auditory physiology and are used clinically for assessing hearing function. While each measure has been extensively explored, the relationships between them remain relatively unexamined. Methods: Distortion product otoacoustic emissions (DPOAE) and speech-evoked auditory brainstem responses (sABR) were recorded from 28 normal-hearing adults. Through correlational analyses, DPOAE characteristics were compared to measures of sABR timing and frequency encoding. Data were organized into two DPOAE (Strength and Structure) and five brainstem (Onset, Spectrotemporal, Harmonics, Envelope Boundary, Pitch) composite measures. Results: DPOAE Strength shows significant relationships with the sABR Spectrotemporal and Harmonics measures. DPOAE Structure shows significant relationships with the sABR Envelope Boundary. Neither DPOAE Strength nor Structure is related to sABR Pitch. Conclusions: The results of the present study show that certain aspects of the speech-evoked auditory brainstem response are related to, or covary with, cochlear function as measured by distortion product otoacoustic emissions. Significance: These results form a foundation for future work in clinical populations. Analyzing cochlear and brainstem function in parallel in different clinical populations will provide a more sensitive clinical battery for identifying the locus of different disorders (e.g., language-based learning impairments, hearing impairment). PMID:19346159
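
    The composite-and-correlate approach can be sketched as follows, under the assumption that a composite is a z-score average of its component measures (the study's exact construction may differ); the data below are placeholders for the 28 subjects' measures.

        import numpy as np
        from scipy import stats

        def composite(*measures):
            """Average the z-scores of several (n_subjects,) measures."""
            z = [(m - m.mean()) / m.std(ddof=1) for m in map(np.asarray, measures)]
            return np.mean(z, axis=0)

        rng = np.random.default_rng(1)
        n = 28                                        # subjects in the study
        dpoae_strength = composite(rng.normal(size=n), rng.normal(size=n))
        sabr_harmonics = composite(rng.normal(size=n), rng.normal(size=n))

        r, p = stats.pearsonr(dpoae_strength, sabr_harmonics)
        print(f"DPOAE Strength vs sABR Harmonics: r = {r:.2f}, p = {p:.3f}")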

  4. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI

    PubMed Central

    Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie

    2015-01-01

    Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as in two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430

  5. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    PubMed Central

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608

  6. Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude

    PubMed Central

    Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea

    2013-01-01

    Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people and are also used for motor rehabilitation in chronic stroke. The ability to use a BCI varies from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods: Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results: Correlation between the auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude, and between accuracy and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions: Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict it in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance: Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444
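
    The predictor logic (relating an oddball ERP amplitude to later BCI spelling accuracy) reduces to a simple regression; the sketch below uses simulated values for the 40 participants, with scipy standing in for the study's statistics software.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n2_amp = rng.normal(-3.0, 1.0, 40)            # simulated N2 amplitudes, uV
        accuracy = 0.9 - 0.05 * n2_amp + rng.normal(0, 0.05, 40)  # spelling accuracy

        res = stats.linregress(n2_amp, accuracy)      # aptitude predictor
        print(f"r = {res.rvalue:.2f}, slope = {res.slope:.3f}, p = {res.pvalue:.3f}")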

  7. Different patterns of modality dominance across development.

    PubMed

    Barnhart, Wesley R; Rivera, Samuel; Robinson, Christopher W

    2018-01-01

    The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Auditory Localization Performance with Gamma Integrated Eye and Ear Protection

    DTIC Science & Technology

    2016-12-01

    [Excerpted report front matter] Fig. 1: IEEP device under test, shown with clear ballistic lens and Comply foam earphone tips. Testing was conducted with the system active in controlled acoustic facilities with large loudspeaker arrays; Method 1 has 2 listener orientations (0° and 45°) and specifies the use of 8…

  9. Educators Prescriptive Handbook: A Developmental Sequence of Learning Skills.

    ERIC Educational Resources Information Center

    Santa Ana Unified School District, CA.

    The handbook lists 141 developmental objectives with instructions for remediation to aid children with learning problems in the areas of sensory motor development, auditory perception, language, visual perception, and academic achievement. Objectives are listed in chart format with each objective associated with one or more skill examples,…

  10. Bottom-up influences of voice continuity in focusing selective auditory attention

    PubMed Central

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2015-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings. PMID:24633644

  11. Differences between Dyslexic and Non-Dyslexic Children in the Performance of Phonological Visual-Auditory Recognition Tasks: An Eye-Tracking Study

    PubMed Central

    Tiadi, Aimé; Seassau, Magali; Gerard, Christophe-Loïc; Bucci, Maria Pia

    2016-01-01

    The object of this study was to explore further phonological visual-auditory recognition tasks in a group of fifty-six healthy children (mean age: 9.9 ± 0.3) and to compare these data to those recorded in twenty-six age-matched dyslexic children (mean age: 9.8 ± 0.2). Eye movements from both eyes were recorded using an infrared video-oculography system (MobileEBT® e(y)e BRAIN). The recognition task was performed under four conditions in which the target object was displayed either with phonologically unrelated objects (baseline condition), or with cohort or rhyme objects (cohort and rhyme conditions, respectively), or both together (rhyme + cohort condition). The percentage of the total time spent on the targets and the latency of the first saccade on the target were measured. Results in healthy children showed that the percentage of the total time spent in the baseline condition was significantly longer than in the other conditions, and that the latency of the first saccade in the cohort condition was significantly longer than in the other conditions; interestingly, the latency decreased significantly with the increasing age of the children. The developmental trend of phonological awareness was also observed in healthy children only. In contrast, we observed that for dyslexic children the total time spent on the target was similar in all four conditions tested, and also that they had similar latency values in both cohort and rhyme conditions. These findings suggest a different sensitivity to the phonological competitors between dyslexic and non-dyslexic children. Also, the eye-tracking technique provides online information about phonological awareness capabilities in children. PMID:27438352
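
    Both eye-movement measures (percentage of total viewing time on the target and latency of the first fixation landing on it) are straightforward to compute from a fixation list; the sketch below uses illustrative data and treats first-fixation onset as a proxy for first-saccade latency.

        # Fixations as (onset_s, duration_s, region) tuples; values are illustrative.
        fixations = [(0.00, 0.15, "distractor"), (0.21, 0.40, "target"),
                     (0.66, 0.10, "cohort"), (0.80, 0.55, "target")]

        total = sum(d for _, d, _ in fixations)
        on_target = sum(d for _, d, r in fixations if r == "target")
        pct_target = 100.0 * on_target / total
        first_latency = next(t for t, _, r in fixations if r == "target")
        print(f"{pct_target:.1f}% of viewing time on target; "
              f"first look at target after {1000 * first_latency:.0f} ms")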

  12. The function of the left anterior temporal pole: evidence from acute stroke and infarct volume

    PubMed Central

    Tsapkini, Kyrana; Frangakis, Constantine E.

    2011-01-01

    The role of the anterior temporal lobes in cognition and language has been much debated in the literature over the last few years. Most prevailing theories argue for an important role of the anterior temporal lobe as a semantic hub or a place for the representation of unique entities such as proper names of peoples and places. Lately, a few studies have investigated the role of the most anterior part of the left anterior temporal lobe, the left temporal pole in particular, and argued that the left anterior temporal pole is the area responsible for mapping meaning on to sound through evidence from tasks such as object naming. However, another recent study indicates that bilateral anterior temporal damage is required to cause a clinically significant semantic impairment. In the present study, we tested these hypotheses by evaluating patients with acute stroke before reorganization of structure–function relationships. We compared a group of 20 patients with acute stroke with anterior temporal pole damage to a group of 28 without anterior temporal pole damage matched for infarct volume. We calculated the average percent error in auditory comprehension and naming tasks as a function of infarct volume using a non-parametric regression method. We found that infarct volume was the only predictive variable in the production of semantic errors in both auditory comprehension and object naming tasks. This finding favours the hypothesis that left unilateral anterior temporal pole lesions, even acutely, are unlikely to cause significant deficits in mapping meaning to sound by themselves, although they contribute to networks underlying both naming and comprehension of objects. Therefore, the anterior temporal lobe may be a semantic hub for object meaning, but its role must be represented bilaterally and perhaps redundantly. PMID:21685458

  13. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    PubMed Central

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
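
    Modal synthesis of the kind described renders an impact sound as a sum of exponentially damped sinusoids, so frequency and damping can be modulated independently of the nominal material category. The mode frequencies, amplitudes, and damping constants below are illustrative, not the study's parameters.

        import numpy as np

        def impact_sound(freqs, amps, dampings, dur=0.8, fs=44100):
            """Sum of damped sinusoids: s(t) = sum_k a_k * exp(-d_k*t) * sin(2*pi*f_k*t)."""
            t = np.arange(int(dur * fs)) / fs
            s = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                    for f, a, d in zip(freqs, amps, dampings))
            return s / np.max(np.abs(s))

        # "Metal-like": higher partials, light damping (long ring).
        metal = impact_sound([800, 2130, 4180], [1.0, 0.6, 0.3], [6, 9, 14])
        # "Drum-like": lower partials, heavy damping (fast decay).
        drum = impact_sound([180, 420, 790], [1.0, 0.5, 0.25], [35, 50, 70])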

  14. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.

  15. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.

    PubMed

    Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G

    2018-01-01

    Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is also a powerful tool for attenuating stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied to identify components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing at the single-subject level. The presented approach assumes that no individual anatomy is available and therefore uses the default ICBM152 anatomy, as implemented in Brainstorm, for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model, based on the symmetric Boundary Element Method (BEM). We then apply dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group-level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
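
    The pipeline itself is MATLAB-based (EEGLAB and Brainstorm), but the same stages (filtering, ICA-based artifact attenuation, a template anatomy, a BEM forward model, and a dSPM inverse) can be sketched in Python with MNE-Python. This is an analogous outline with placeholder file names and component indices, not the authors' code.

        import os.path as op
        import mne
        from mne.datasets import fetch_fsaverage

        raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # placeholder file
        raw.set_montage("standard_1020")               # template sensor positions
        raw.filter(l_freq=1.0, h_freq=40.0)
        raw.set_eeg_reference("average", projection=True)

        # ICA-based artifact attenuation; excluded components chosen by inspection.
        ica = mne.preprocessing.ICA(n_components=20, random_state=0)
        ica.fit(raw)
        ica.exclude = [0, 1]                           # e.g., blink and heartbeat
        ica.apply(raw)

        # Epoch and model noise from the pre-stimulus baseline, as in the paper.
        events = mne.find_events(raw)
        epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, baseline=(None, 0))
        noise_cov = mne.compute_covariance(epochs, tmax=0.0)

        # Template anatomy: fsaverage plays the role the ICBM152 template plays
        # in Brainstorm; the BEM here is the standard three-layer model.
        fs_dir = fetch_fsaverage()
        src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
        bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")
        fwd = mne.make_forward_solution(epochs.info, trans="fsaverage",
                                        src=src, bem=bem)
        inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
        stc = mne.minimum_norm.apply_inverse(epochs.average(), inv,
                                             lambda2=1.0 / 9.0, method="dSPM")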

  16. Pre-attentive auditory discrimination skill in Indian classical vocal musicians and non-musicians.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-09-01

    To test for pre-attentive auditory discrimination skills in Indian classical vocal musicians and non-musicians, mismatch negativity (MMN) was recorded with a pair of tonal stimuli, a frequent 1000-Hz standard and an infrequent 1100-Hz deviant. Onset, offset and peak latencies were the latency parameters considered, whereas peak amplitude and area under the curve were considered for the amplitude analysis. Fifty participants were included: an experimental group of 25 adult Indian classical vocal musicians and a control group of 25 age-matched non-musicians. Participants in the experimental group had at least 10 years of professional experience in Indian classical vocal music, whereas control participants had no formal training in music. Descriptive statistics showed better waveform morphology in the experimental group than in the control group. MANOVA showed significantly earlier onset latency and significantly larger peak amplitude and area under the curve in the experimental group, but no significant difference in the offset and peak latencies between the two groups. The present study points towards a likely enhancement of pre-attentive auditory discrimination skills in Indian classical vocal musicians compared to non-musicians, suggesting that Indian classical vocal training enhances these skills, as reflected in higher peak amplitudes and a greater area under the curve.
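
    The MMN quantification described above reduces to a difference wave and a few scalar measures; a sketch with simulated standard and deviant ERPs follows (the sampling rate and the 100-250 ms analysis window are assumptions).

        import numpy as np

        fs = 500                                       # sampling rate, Hz (assumed)
        t = np.arange(-0.1, 0.5, 1 / fs)               # epoch time axis, s
        rng = np.random.default_rng(3)
        erp_standard = rng.normal(0, 0.2, t.size)      # response to 1000-Hz standard
        erp_deviant = erp_standard - 2.0 * np.exp(-((t - 0.17) / 0.03) ** 2)

        mmn = erp_deviant - erp_standard               # difference wave
        win = (t >= 0.10) & (t <= 0.25)                # assumed MMN window
        peak_amp = mmn[win].min()                      # MMN is a negativity
        peak_lat = t[win][np.argmin(mmn[win])]
        area = np.trapz(np.abs(mmn[win]), t[win])      # area under the curve
        print(f"peak {peak_amp:.2f} uV at {1000 * peak_lat:.0f} ms, area {area:.3f}")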

  17. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.

    PubMed

    Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R

    2018-02-14

    Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.

  18. Effects of hearing aids in the balance, quality of life and fear to fall in elderly people with sensorineural hearing loss

    PubMed Central

    Lacerda, Clara Fonseca; Silva, Luciana Oliveira e; de Tavares Canto, Roberto Sérgio; Cheik, Nadia Carla

    2012-01-01

    Introduction: The aging process causes structural and functional changes that compromise postural control and central processing. Previous studies have addressed the need to identify risk factors for hearing health and safety in elderly people affected by auditory deficits and balance disorders. Objective: To evaluate the effect of hearing aids on quality of life, balance, and fear of falling in elderly people with bilateral hearing loss. Method: A clinical, experimental study was carried out with 56 elderly people with sensorineural hearing loss who were fitted with individual sound amplification devices (AASI; hearing aids). Participants answered the Short Form Health Survey (SF-36) quality-of-life questionnaire and the Falls Efficacy Scale-International (FES-I), and completed the Berg Balance Scale (BBS). After 4 months, the participants who had adapted to the hearing aids were reevaluated. Results: 50% of the participants adapted to the hearing aid. Men had greater difficulty adapting to the device, whereas age, degree of hearing loss, and the presence of tinnitus or vertigo did not affect adaptation. After adaptation, quality of life improved in the General Health and Functional Capacity domains, tinnitus improved, and self-confidence increased. Conclusion: The use of hearing aids improved quality-of-life domains, which was reflected in greater self-confidence and, over time, in a reduced fear of falling in elderly people with sensorineural hearing loss. PMID:25991930

  1. Use of transcranial direct current stimulation for the treatment of auditory hallucinations of schizophrenia – a systematic review

    PubMed Central

    Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila

    2017-01-01

    Introduction: Auditory hallucinations are experiences of auditory perception in the absence of a provoking external stimulus. They are the most prevalent symptom of schizophrenia and often become chronic and refractory over the course of the disease. Transcranial direct current stimulation (tDCS) – a safe, portable, and inexpensive neuromodulation technique – has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence available in the literature for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed, searching the main electronic databases, including the Cochrane Library and MEDLINE/PubMed. Searches combined descriptors from the Medical Subject Headings (MeSH) vocabulary. The PRISMA protocol was used as a guide, and the clinical outcome terms ("Schizophrenia" OR "Auditory Hallucinations" OR "Auditory Verbal Hallucinations" OR "Psychosis") were searched together ("AND") with the intervention terms ("transcranial Direct Current Stimulation" OR "tDCS" OR "Brain Polarization"). Results: Six randomized controlled trials that evaluated the effects of tDCS on the severity of auditory hallucinations in schizophrenic patients were selected. The clinical results of these studies were inconsistent with regard to the therapeutic use of tDCS for reducing the severity of auditory hallucinations in schizophrenia. Only three studies revealed a therapeutic benefit, manifested as reductions in the severity and frequency of auditory verbal hallucinations. Conclusion: Although tDCS has shown promising results in reducing the severity of auditory hallucinations in schizophrenic patients, it cannot yet be recommended as an established therapeutic alternative, given the lack of large-sample studies replicating the positive effects described. PMID:28203084
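
    For readers who want to reproduce the PubMed arm of the search, the reported boolean terms can be submitted directly to NCBI's E-utilities; the sketch below is illustrative and covers only PubMed, not the other databases searched.

        import requests

        term = ('("Schizophrenia" OR "Auditory Hallucinations" OR '
                '"Auditory Verbal Hallucinations" OR "Psychosis") AND '
                '("transcranial Direct Current Stimulation" OR "tDCS" OR '
                '"Brain Polarization")')
        r = requests.get(
            "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
            params={"db": "pubmed", "term": term, "retmode": "json"})
        ids = r.json()["esearchresult"]["idlist"]     # PMIDs on the first page
        print(f"{len(ids)} records returned")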

  2. Auditory Training with Frequent Communication Partners

    ERIC Educational Resources Information Center

    Tye-Murray, Nancy; Spehar, Brent; Sommers, Mitchell; Barcroft, Joe

    2016-01-01

    Purpose: Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)--speech they most likely desire to recognize--under the assumption that familiarity…

  3. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons were sensitive to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code the low-frequency temporal information present in acoustic signals. These data suggest that the somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  4. Neurotoxicity of trimethyltin in rat cochlear organotypic cultures

    PubMed Central

    Yu, Jintao; Ding, Dalian; Sun, Hong; Salvi, Richard; Roth, Jerome A.

    2015-01-01

    Trimethyltin (TMT), which has a variety of applications in industry and agriculture, is a neurotoxin known to affect the auditory system as well as the central nervous system (CNS) of humans and experimental animals. However, the mechanisms underlying TMT-induced auditory dysfunction are poorly understood. To gain insight into the neurotoxic effect of TMT on the peripheral auditory system, we treated cochlear organotypic cultures with concentrations of TMT ranging from 5 to 100 μM for 24 h. Interestingly, TMT preferentially damaged auditory nerve fibers and spiral ganglion neurons in a dose-dependent manner, but had no noticeable effect on the sensory hair cells at the doses employed. TMT-induced damage to auditory neurons was associated with significant soma shrinkage, nuclear condensation and activation of caspase-3, biomarkers indicative of apoptotic cell death. Our findings show that TMT is exclusively neurotoxic to auditory neurons in rat cochlear organotypic cultures and that TMT-induced auditory neuron death occurs through a caspase-mediated apoptotic pathway. PMID:25957118

  5. How challenges in auditory fMRI led to general advancements for the field.

    PubMed

    Talavage, Thomas M; Hall, Deborah A

    2012-08-15

    In the early years of fMRI research, the auditory neuroscience community sought to expand its knowledge of the underlying physiology of hearing, while also seeking to come to grips with the inherent acoustic disadvantages of working in the fMRI environment. Early collaborative efforts between prominent auditory research laboratories and prominent fMRI centers led to development of a number of key technical advances that have subsequently been widely used to elucidate principles of auditory neurophysiology. Perhaps the key imaging advance was the simultaneous and parallel development of strategies to use pulse sequences in which the volume acquisitions were "clustered," providing gaps in which stimuli could be presented without direct masking. Such sequences have become widespread in fMRI studies using auditory stimuli and also in a range of translational research domains. This review presents the parallel stories of the people and the auditory neurophysiology research that led to these sequences. Copyright © 2011 Elsevier Inc. All rights reserved.
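
    The core idea of such "clustered" (sparse) sequences is easy to sketch: compress each volume acquisition into part of a long repetition time (TR) and place the stimulus in the silent gap. The timing values below are illustrative, not those of any particular study.

        tr = 10.0            # s, long repetition time
        t_acq = 2.0          # s, clustered volume acquisition (scanner noise)
        t_stim = 4.0         # s, stimulus duration

        for vol in range(4):
            t0 = vol * tr
            gap_start, gap_end = t0 + t_acq, t0 + tr
            # Center the stimulus in the silent gap so it is not masked.
            stim_onset = gap_start + (gap_end - gap_start - t_stim) / 2.0
            print(f"vol {vol}: acquire {t0:.1f}-{t0 + t_acq:.1f} s, "
                  f"stimulus {stim_onset:.1f}-{stim_onset + t_stim:.1f} s")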

  6. Current understanding of auditory neuropathy.

    PubMed

    Boo, Nem-Yun

    2008-12-01

    Auditory neuropathy is defined by the presence of normal evoked otoacoustic emissions (OAE) and absent or abnormal auditory brainstem responses (ABR). The sites of lesion could be at the cochlear inner hair cells, the spiral ganglion cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or the auditory nerve itself. Genetic, infectious or neonatal/perinatal insults are the 3 most commonly identified underlying causes. Children usually present with delay in speech and language development, while adult patients present with hearing loss and disproportionately poor speech discrimination for the degree of hearing loss. Although cochlear implantation is the treatment of choice, current evidence shows that it benefits only those patients with endocochlear lesions, but not those with cochlear nerve deficiency or central nervous system disorders. As auditory neuropathy is a disorder with potential long-term impact on a child's development, early hearing screening using both OAE and ABR should be carried out on all newborns and infants to allow early detection and intervention.

  7. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  8. Options for Auditory Training for Adults with Hearing Loss.

    PubMed

    Olson, Anne D

    2015-11-01

    Hearing aid devices alone do not adequately compensate for sensory losses despite significant advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training is another avenue for improving speech recognition in noise and satisfaction with devices. The literature on auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to obtain optimal performance from devices. Several auditory training programs that are readily accessible to adults with hearing loss, hearing aids, or cochlear implants are described, including programs that can be accessed via Web-based formats and smartphone technology. A summary table is provided for easy access to the programs, with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program for their needs.

  9. Modulating Human Auditory Processing by Transcranial Electrical Stimulation

    PubMed Central

    Heimrath, Kai; Fiene, Marina; Rufener, Katharina S.; Zaehle, Tino

    2016-01-01

    Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders. PMID:27013969

  10. Separating pitch chroma and pitch height in the human brain

    PubMed Central

    Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.

    2003-01-01

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719

  11. Electrically-evoked frequency-following response (EFFR) in the auditory brainstem of guinea pigs.

    PubMed

    He, Wenxin; Ding, Xiuyong; Zhang, Ruxiang; Chen, Jing; Zhang, Daoxing; Wu, Xihong

    2014-01-01

    It is still a difficult clinical issue to decide whether a patient is a suitable candidate for a cochlear implant and to plan postoperative rehabilitation, especially for some special cases, such as auditory neuropathy. A partial solution to these problems is to preoperatively evaluate the functional integrity of the auditory neural pathways. To evaluate the strength of phase-locking of auditory neurons, which is not reflected in previous methods using the electrically evoked auditory brainstem response (EABR), a new method for recording phase-locking-related auditory responses to electrical stimulation, called the electrically evoked frequency-following response (EFFR), was developed and evaluated in guinea pigs. The main objective was to assess the feasibility of the method by testing whether the recorded signals reflected auditory neural responses or artifacts. The results showed the following: 1) the recorded signals were evoked by neural responses rather than by artifacts; 2) responses evoked by periodic signals were significantly larger than those evoked by white noise; 3) the latency of the responses fell in the expected range; 4) the responses decreased significantly after death of the guinea pigs; and 5) the responses decreased significantly when the animal was replaced by an electrical resistor. All of these results suggest the method is valid. Recordings obtained using complex tones with a missing fundamental component and using pure tones at various frequencies were consistent with those obtained using acoustic stimulation in previous studies.
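
    One of the validity checks above (periodic signals evoking larger responses than noise) corresponds to a simple spectral test: the response magnitude at the stimulation frequency should clearly exceed the neighbouring noise floor. A sketch with simulated data, under assumed sampling and stimulus parameters:

        import numpy as np

        fs, f0, dur = 10000, 200.0, 0.5                # Hz, Hz, s (assumed)
        t = np.arange(int(fs * dur)) / fs
        rng = np.random.default_rng(4)
        effr = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, t.size)

        spec = np.abs(np.fft.rfft(effr)) / t.size      # magnitude spectrum
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        sig = spec[np.argmin(np.abs(freqs - f0))]      # bin at the stimulus rate
        noise = spec[(np.abs(freqs - f0) > 10) & (np.abs(freqs - f0) < 60)].mean()
        print(f"signal-to-noise at {f0:.0f} Hz: {20 * np.log10(sig / noise):.1f} dB")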

  12. Baseline vestibular and auditory findings in a trial of post-concussive syndrome

    PubMed

    Meehan, Anna; Searing, Elizabeth; Weaver, Lindell; Lewandowski, Andrew

    2016-01-01

    Previous studies have reported high rates of auditory and vestibular-balance deficits immediately following head injury. This study uses a comprehensive battery of assessments to characterize auditory and vestibular function in 71 U.S. military service members with chronic symptoms following mild traumatic brain injury that did not resolve with traditional interventions. The majority of the study population reported hearing loss (70%) and recent vestibular symptoms (83%). Central auditory deficits were most prevalent, with 58% of participants failing the SCAN3:A screening test and 45% showing abnormal responses on auditory steady-state response testing presented at a suprathreshold intensity. Only 17% of the participants had abnormal hearing (>25 dB hearing loss) based on the pure-tone average. Objective vestibular testing supported significant deficits in this population, regardless of whether the participant self-reported active symptoms. The composite score on the Sensory Organization Test was lower than expected from normative data (mean 69.6 ± 15.6). High abnormality rates were found in funduscopy torsion (58%), oculomotor assessments (49%), ocular and cervical vestibular evoked myogenic potentials (46% and 33%, respectively), and monothermal calorics (40%). It is recommended that a full peripheral and central auditory, oculomotor, and vestibular-balance evaluation be completed on military service members who have sustained head trauma.

  13. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  14. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    PubMed

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

    Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  15. The speech naturalness of people who stutter speaking under delayed auditory feedback as perceived by different groups of listeners.

    PubMed

    Van Borsel, John; Eeckhout, Hannelore

    2008-09-01

    This study investigated listeners' perception of the speech naturalness of people who stutter (PWS) speaking under delayed auditory feedback (DAF) with particular attention for possible listener differences. Three panels of judges consisting of 14 stuttering individuals, 14 speech language pathologists, and 14 naive listeners rated the naturalness of speech samples of stuttering and non-stuttering individuals using a 9-point interval scale. Results clearly indicate that these three groups evaluate naturalness differently. Naive listeners appear to be more severe in their judgements than speech language pathologists and stuttering listeners, and speech language pathologists are apparently more severe than PWS. The three listener groups showed similar trends with respect to the relationship between speech naturalness and speech rate. Results of all three indicated that for PWS, the slower a speaker's rate was, the less natural speech was judged to sound. The three listener groups also showed similar trends with regard to naturalness of the stuttering versus the non-stuttering individuals. All three panels considered the speech of the non-stuttering participants more natural. The reader will be able to: (1) discuss the speech naturalness of people who stutter speaking under delayed auditory feedback, (2) discuss listener differences about the naturalness of people who stutter speaking under delayed auditory feedback, and (3) discuss the importance of speech rate for the naturalness of speech.

  16. Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets

    PubMed Central

    Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.

    2017-01-01

    The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489

  18. The influence of cochlear implants on behaviour problems in deaf children.

    PubMed

    Jiménez-Romero, Ma Salud

    2015-01-01

    This study seeks to analyse the relationship between behaviour problems in deaf children and their auditory and communication development subsequent to cochlear implantation, and to examine the incidence of these problems in comparison with their hearing peers. The study uses an ex post facto prospective design with a sample of 208 Spanish children, of whom 104 were deaf subjects with cochlear implants. The first objective was to assess the relationships between behaviour problems, auditory integration, and social and communication skills in the group of deaf children; the second was to compare the frequency and intensity of behaviour problems in the deaf children with those of their hearing peers. The correlation analysis showed a significant association between the internal index of behaviour problems and auditory integration and communication skills, such that deaf children with greater auditory and communication development did not present behaviour problems. When comparing deaf children with their hearing peers, behavioural disturbances were significantly more frequent in the former. According to these findings, cochlear implants may not guarantee the adequate auditory and communicative development that would normalise the behaviour of deaf children.

  19. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    PubMed

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and therefore cannot contribute to visual orientation towards objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability was demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted to determine whether the differences in catching ability between deaf and hearing children could be attributed to slow orienting movements and/or slow reaction times resulting from the auditory loss. The deaf children showed slower reaction times, but no differences in movement times were found between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions, such as catching, that are both spatially and temporally constrained.

  20. Significance of auditory and kinesthetic feedback to singers' pitch control.

    PubMed

    Mürbe, Dirk; Pabst, Friedemann; Hofmann, Gert; Sundberg, Johan

    2002-03-01

    Accurate control of fundamental frequency (F0) is required of singers. This control relies on auditory and kinesthetic feedback; however, a loud accompaniment may mask the auditory feedback, leaving the singers to rely on kinesthetic feedback. The objective of the present study was to estimate the significance of auditory and kinesthetic feedback to pitch control in 28 students beginning a professional solo singing education. The singers sang an ascending and descending triad pattern covering their entire pitch range, with and without masking noise, in legato and staccato, and in a slow and a fast tempo. F0 was measured by means of a computer program. The interval sizes between adjacent tones were determined and their departures from equally tempered tuning calculated; these deviations were used as a measure of intonation accuracy. Statistical analysis showed a significant effect of masking, amounting to a mean impairment of pitch accuracy of 14 cents across all subjects. Significant effects of tempo and of the staccato/legato condition were also found. The results indicate that auditory feedback contributes significantly to singers' control of pitch.
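
    As a rough illustration of the intonation measure described above, the deviation of a sung interval from equally tempered tuning is expressed in cents (1/100 of an equal-tempered semitone; 1200 cents per octave). The sketch below is a minimal Python example under that standard definition; the function names and example frequencies are illustrative, not taken from the study.

        import math

        def interval_in_cents(f1, f2):
            # Interval size in cents between two fundamental frequencies
            # (100 cents = 1 equal-tempered semitone, 1200 cents = 1 octave).
            return 1200.0 * math.log2(f2 / f1)

        def deviation_from_equal_temperament(f1, f2, semitones):
            # Departure (in cents) of a sung interval from its equal-tempered
            # target of `semitones` * 100 cents; 0 means perfect intonation.
            return interval_in_cents(f1, f2) - 100.0 * semitones

        # Illustrative example: a major third (4 semitones) sung from
        # A4 = 440 Hz up to 556 Hz comes out about 5 cents sharp.
        print(deviation_from_equal_temperament(440.0, 556.0, 4))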

  1. Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”

    PubMed Central

    Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.

    2010-01-01

    Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526
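
    The abstract does not spell out how the hemispheric alpha difference was computed, but lateralization of band power is commonly summarized with a normalized right-minus-left index. The following is a generic sketch of that index, assuming NumPy/SciPy; the channel pairing and the synthetic data are purely illustrative, not the authors' pipeline.

        import numpy as np
        from scipy.signal import welch

        def alpha_power(x, fs, band=(8.0, 12.0)):
            # Mean power spectral density in the alpha band for one channel.
            freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
            keep = (freqs >= band[0]) & (freqs <= band[1])
            return psd[keep].mean()

        def alpha_lateralization(left, right, fs):
            # Normalized right-minus-left alpha power: positive values mean
            # relatively more alpha power over the right-hemisphere site.
            r, l = alpha_power(right, fs), alpha_power(left, fs)
            return (r - l) / (r + l)

        # Illustrative example: 10 s of noise sampled at 500 Hz, with an
        # extra 10 Hz oscillation added to the right-hemisphere channel.
        rng = np.random.default_rng(0)
        fs = 500
        t = np.arange(10 * fs) / fs
        left = rng.standard_normal(t.size)
        right = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)
        print(alpha_lateralization(left, right, fs))  # > 0: right-lateralized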

  2. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  3. Lesion localization of speech comprehension deficits in chronic aphasia

    PubMed Central

    Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.

    2017-01-01

    Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469

  4. Dyslexia and Specific Language Impairment: The Role of Phonology and Auditory Processing

    ERIC Educational Resources Information Center

    Fraser, Jill; Goswami, Usha; Conti-Ramsden, Gina

    2010-01-01

    We explore potential similarities between developmental dyslexia (specific reading disability [SRD]) and specific language impairment (SLI) in terms of phonological skills, underlying auditory processing abilities, and nonphonological language skills. Children aged 9 to 11 years with reading and/or language difficulties were recruited and compared…

  5. Cortical Interactions Underlying the Production of Speech Sounds

    ERIC Educational Resources Information Center

    Guenther, Frank H.

    2006-01-01

    Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…

  6. Effect of Three Classroom Listening Conditions on Speech Intelligibility

    ERIC Educational Resources Information Center

    Ross, Mark; Giolas, Thomas G.

    1971-01-01

    Speech discrimination scores for 13 deaf children were obtained in a classroom under three conditions: the usual listening condition (hearing aid or not), a binaural listening situation using an auditory trainer/FM receiver with the wireless microphone transmitter turned off, and a binaural condition with inputs from auditory trainer/FM receiver and wireless microphone/FM…

  7. Stochastic correlative firing for figure-ground segregation.

    PubMed

    Chen, Zhe

    2005-03-01

    Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.

  8. Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.

    PubMed

    Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch-shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from the auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on the superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. Of these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking, while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies are involved in this process, and the modulation of AEP and high gamma responses implies that such modulatory effects may act on different cortical generators within the distinct functional networks that drive voice production and control.
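
    The abstract quantifies responses as event-related band power in the high gamma (70-150 Hz) range. One standard way to extract such a measure, offered here only as a generic sketch (the authors' exact ERBP computation is not specified in the abstract), is to band-pass filter the signal and take the Hilbert amplitude envelope:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def high_gamma_envelope(x, fs, band=(70.0, 150.0), order=4):
            # Band-pass the signal in the high-gamma range, then take the
            # instantaneous amplitude envelope via the Hilbert transform.
            nyq = fs / 2.0
            b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
            return np.abs(hilbert(filtfilt(b, a, x)))

        # Illustrative example: 2 s of noise sampled at 1 kHz with a 100 Hz
        # burst between 0.5 and 0.7 s; the envelope rises during the burst.
        fs = 1000
        t = np.arange(2 * fs) / fs
        x = np.random.randn(t.size)
        x[500:700] += 2.0 * np.sin(2 * np.pi * 100.0 * t[500:700])
        env = high_gamma_envelope(x, fs)
        print(env[500:700].mean() > env[:500].mean())  # True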

  10. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    ERIC Educational Resources Information Center

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  11. Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.

    PubMed

    Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion

    2017-07-01

    The human auditory system has a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components of the response to a sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach, allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded in every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or its harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for responses to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic transient evoked-potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence of the neural processing of contrasts embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture neural markers of contrasts in fast, continuous tone sequences in humans. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
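
    The arithmetic behind the tagging is simple: with sounds at 8 Hz and a contrast on every fifth sound (AAAAB), any contrast-specific response must repeat at 8/5 = 1.6 Hz and its harmonics, where it can be read off the EEG spectrum separately from the 8 Hz response to the sounds themselves. The toy simulation below illustrates that spectral logic with idealized impulse responses; it is not the authors' analysis pipeline.

        import numpy as np

        fs = 512                     # sampling rate (Hz), illustrative
        dur = 40                     # sequence duration (s), as in the study
        n = dur * fs

        base = np.zeros(n)           # response to every sound (8 Hz train)
        odd = np.zeros(n)            # extra response to each B deviant
        for k in range(dur * 8):     # one sound every 125 ms
            idx = k * fs // 8
            base[idx] = 1.0
            if k % 5 == 4:           # every fifth sound carries the contrast
                odd[idx] = 0.5

        spectrum = np.abs(np.fft.rfft(base + odd))
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        for f in (1.6, 8.0):         # tag frequency and base stimulation rate
            print(f, spectrum[np.argmin(np.abs(freqs - f))])
        # A peak appears at 1.6 Hz only because of the contrast responses.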

  12. Investigating the Impact of Hearing Aid Use and Auditory Training on Cognition, Depressive Symptoms, and Social Interaction in Adults With Hearing Loss: Protocol for a Crossover Trial

    PubMed Central

    Meyer, Denny; Blamey, Peter J; Pipingas, Andrew; Bhar, Sunil

    2018-01-01

    Background Sensorineural hearing loss is the most common sensory deficit among older adults. Some of the psychosocial consequences of this condition include difficulty in understanding speech, depression, and social isolation. Studies have shown that older adults with hearing loss exhibit some age-related cognitive decline. Hearing aids have proven to be successful interventions for alleviating sensorineural hearing loss. In addition to hearing aid use, the positive effects of auditory training—formal listening activities designed to optimize speech perception—are now being documented among adults with hearing loss who use hearing aids, especially new hearing aid users. Auditory training has also been shown to produce prolonged cognitive performance improvements. However, there is still little evidence to support the benefits of simultaneous hearing aid use and individualized face-to-face auditory training on cognitive performance in adults with hearing loss. Objective This study will investigate whether using hearing aids for the first time improves the impact of individualized face-to-face auditory training on cognition, depression, and social interaction for adults with sensorineural hearing loss. The rationale for this study is based on the hypothesis that, in adults with sensorineural hearing loss, using hearing aids for the first time in combination with individualized face-to-face auditory training will be more effective for improving cognition, depressive symptoms, and social interaction than auditory training on its own. Methods This is a crossover trial targeting 40 men and women between 50 and 90 years of age with either mild or moderate symmetric sensorineural hearing loss. Consenting participants will be recruited from an independent living accommodation or via a community database to undergo a 6-month intensive face-to-face auditory training program (active control). Participants will be assigned in random order to receive a hearing aid (intervention) for either the first 3 or the last 3 months of the 6-month auditory training program. Each participant will be tested at baseline, 3 months, and 6 months using a neuropsychological battery of computer-based cognitive assessments, together with a depression symptom instrument and a social interaction measure. The primary outcome will be cognitive performance with regard to spatial working memory. Secondary outcome measures include other cognitive performance measures, depressive symptoms, social interaction, and hearing satisfaction. Results Data analysis is currently under way and the first results are expected to be submitted for publication in June 2018. Conclusions Results from the study will inform strategies for aural rehabilitation, hearing aid delivery, and future hearing loss intervention trials. Trial Registration ClinicalTrials.gov NCT03112850; https://clinicaltrials.gov/ct2/show/NCT03112850 (Archived by WebCite at http://www.webcitation.org/6xz12fD0B). PMID:29572201

  13. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    PubMed Central

    Haley, Katarina L.

    2015-01-01

    Purpose To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not expected to improve speech fluency. Method Ten participants with APH/AOS and 10 neurologically healthy (NH) participants were studied under both feedback conditions. To allow examination of individual responses, we used an ABACA design. Effects were examined on syllable rate, disfluency duration, and vocal intensity. Results Seven of 10 APH/AOS participants increased fluency with masking by increasing rate, decreasing disfluency duration, or both. In contrast, none of the NH participants increased speaking rate with MAF. In the AAF condition, only 1 APH/AOS participant increased fluency. Four APH/AOS participants and 8 NH participants slowed their rate with AAF. Conclusions Speaking with MAF appears to increase fluency in a subset of individuals with APH/AOS, indicating that overreliance on auditory feedback monitoring may contribute to their disorder presentation. The distinction between responders and nonresponders was not linked to AOS diagnosis, so additional work is needed to develop hypotheses for candidacy and underlying control mechanisms. PMID:26363508

  14. Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha

    PubMed Central

    Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim

    2015-01-01

    The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in the delta and theta bands reflects functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641
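
    The fidelity of entrainment to the speech envelope within a given band can be summarized, for instance, as the EEG-envelope spectral coherence averaged over that band. The sketch below shows that generic measure for delta (1-4 Hz) versus theta (4-8 Hz) using synthetic signals; it is an assumption-laden illustration, not the metric used in the paper.

        import numpy as np
        from scipy.signal import coherence

        def band_coherence(eeg, envelope, fs, band):
            # Mean EEG-envelope spectral coherence within a frequency band,
            # e.g. delta (1-4 Hz) or theta (4-8 Hz).
            freqs, coh = coherence(eeg, envelope, fs=fs, nperseg=int(4 * fs))
            keep = (freqs >= band[0]) & (freqs <= band[1])
            return coh[keep].mean()

        # Illustrative example: a 2 Hz (delta-range) envelope that the
        # synthetic EEG tracks, plus noise; delta-band coherence comes out
        # higher than theta-band coherence.
        rng = np.random.default_rng(1)
        fs = 100
        t = np.arange(60 * fs) / fs
        envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
        eeg = 0.8 * envelope + rng.standard_normal(t.size)
        print(band_coherence(eeg, envelope, fs, (1.0, 4.0)))  # higher
        print(band_coherence(eeg, envelope, fs, (4.0, 8.0)))  # lower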

  15. Monitoring auditory cortical plasticity in hearing aid users with long latency auditory evoked potentials: a longitudinal study.

    PubMed

    Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile

    2018-02-19

    The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fitting in children with sensorineural hearing loss and in age-matched children with normal hearing. Thirty-two subjects of both genders, aged 7 to 12 years, participated in this study and were divided into two groups: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry, and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (for both speech and tone burst stimuli), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (in both the amplitude and the presence of responses) after hearing aid use. Alterations in the central auditory pathways can thus be identified using the P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users.

  16. [Auditory rehabilitation programmes for adults: what do we know about their effectiveness?].

    PubMed

    Cardemil, Felipe; Aguayo, Lorena; Fuente, Adrian

    2014-01-01

    Hearing loss ranks third among the health conditions that involve disability-adjusted life years. Hearing aids are the most commonly used treatment option for people with hearing loss. However, a number of auditory rehabilitation programmes have been developed with the aim of improving communicative abilities in people with hearing loss. The objective of this review was to determine the effectiveness of auditory rehabilitation programmes focused on communication strategies. This was a narrative review based on a literature search of PubMed. The search included systematic reviews investigating the effectiveness of auditory training and of individual and group auditory rehabilitation programmes focused mainly on counselling and communicative strategies for adults with hearing loss. Each study was analysed in terms of the type of intervention used and the results obtained. Three articles were identified: one on the effectiveness of auditory training programmes and two systematic reviews investigating the effectiveness of communicative programmes in adults with hearing loss. The "Active Communication Education" programme appears to be an effective group auditory rehabilitation programme that may be used with older Spanish-speaking adults. Hearing aid fitting and communicative programmes as rehabilitation options are associated with improvements in social participation and quality of life in patients with hearing loss; group auditory rehabilitation programmes in particular seem to have good potential for reducing activity limitations and restrictions on social participation, and thus for improving patient quality of life. Copyright © 2013 Elsevier España, S.L. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  17. Vocalization-Induced Enhancement of the Auditory Cortex Responsiveness during Voice F0 Feedback Perturbation

    PubMed Central

    Behroozmand, Roozbeh; Karvelis, Laura; Liu, Hanjun; Larson, Charles R.

    2009-01-01

    Objective The present study investigated whether self-vocalization enhances auditory neural responsiveness to voice pitch feedback perturbation and how this vocalization-induced neural modulation is affected by the size of the feedback deviation. Method Event-related potentials (ERPs) were recorded in 15 subjects in response to +100, +200 and +500 cents pitch-shifted voice auditory feedback during active vocalization and during passive listening to playback of the self-produced vocalizations. Results The amplitudes of the evoked P1 (latency: 73.51 ms) and P2 (latency: 199.55 ms) ERP components in response to feedback perturbation were significantly larger during vocalization than during listening. The difference between P2 peak amplitudes during vocalization versus listening was significantly larger for the +100 than for the +500 cents stimulus. Conclusion Results indicate that the human auditory cortex is more responsive to voice F0 feedback perturbations during vocalization than during passive listening. The greater vocalization-induced enhancement of auditory responsiveness to smaller feedback perturbations may imply that the audio-vocal system detects and corrects errors in vocal production that closely match the expected vocal output. Significance The findings of this study support previous suggestions of enhanced auditory sensitivity to feedback alterations during self-vocalization, which may serve the purpose of feedback-based monitoring of one's voice. PMID:19520602
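
    For reference, a pitch shift expressed in cents maps onto a multiplicative frequency ratio of 2^(cents/1200), so the +100, +200 and +500 cents perturbations correspond to scaling the feedback F0 by roughly 1.06, 1.12 and 1.33. A one-function sketch of that standard conversion (illustrative; this is not the study's stimulus code):

        def shift_f0(f0_hz, cents):
            # Apply a pitch shift given in cents: 100 cents = 1 semitone,
            # and the frequency ratio is 2 ** (cents / 1200).
            return f0_hz * 2.0 ** (cents / 1200.0)

        # Illustrative example: a 200 Hz voice F0 under the three shifts.
        for cents in (100, 200, 500):
            print(cents, round(shift_f0(200.0, cents), 1))
        # 100 -> 211.9 Hz, 200 -> 224.5 Hz, 500 -> 267.0 Hz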

  18. Pairing broadband noise with cortical stimulation induces extensive suppression of ascending sensory activity

    NASA Astrophysics Data System (ADS)

    Markovitz, Craig D.; Hogan, Patrick S.; Wesen, Kyle A.; Lim, Hubert H.

    2015-04-01

    Objective. The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. Approach. We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. Main results. All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. Significance. We propose that the corticofugal system is designed to decrease the brain’s input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.

  19. Detection Rates of Cortical Auditory Evoked Potentials at Different Sensation Levels in Infants with Sensory/Neural Hearing Loss and Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Gardner-Berry, Kirsty; Chang, Hsiuwen; Ching, Teresa Y. C.; Hou, Sanna

    2016-01-01

    With the introduction of newborn hearing screening, infants are being diagnosed with hearing loss during the first few months of life. For infants with a sensory/neural hearing loss (SNHL), the audiogram can be estimated objectively using auditory brainstem response (ABR) testing and hearing aids prescribed accordingly. However, for infants with auditory neuropathy spectrum disorder (ANSD), whose ABR waveforms are abnormal or absent, alternative measures of auditory function are needed to assess the need for amplification and to evaluate whether aided benefit has been achieved. Cortical auditory evoked potentials (CAEPs) are used to assess aided benefit in infants with hearing loss; however, there is insufficient information regarding the relationship between stimulus audibility and CAEP detection rates. It is also not clear whether CAEP detection rates differ between infants with SNHL and infants with ANSD. This study involved retrospective collection of CAEP, hearing threshold, and hearing aid gain data to investigate the relationship between stimulus audibility and CAEP detection rates. The results demonstrate that increases in stimulus audibility lead to increases in detection rate. For the same range of sensation levels, there was no difference in detection rates between infants with SNHL and infants with ANSD. PMID:27587922
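
    Sensation level here is simply how far the (aided) stimulus sits above the infant's hearing threshold, so the relationship investigated is between CAEP detection and that margin of audibility. A minimal sketch of the bookkeeping under the field's standard definition (the study's exact computation of aided stimulus levels is not reproduced here):

        def sensation_level(aided_stimulus_level_db, threshold_db):
            # Sensation level (dB SL): aided stimulus level minus hearing
            # threshold, both in comparable dB units; positive values mean
            # the stimulus is audible, larger values mean more audible.
            return aided_stimulus_level_db - threshold_db

        # Illustrative example: a 65 dB stimulus with 20 dB of aided gain
        # against a 70 dB threshold sits 15 dB above threshold.
        print(sensation_level(65 + 20, 70))  # 15 dB SL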

  20. Hearing improvement with softband and implanted bone-anchored hearing devices and modified implantation surgery in patients with bilateral microtia-atresia.

    PubMed

    Wang, Yibei; Fan, Xinmiao; Wang, Pu; Fan, Yue; Chen, Xiaowei

    2018-01-01

    To evaluate auditory development and hearing improvement in patients with bilateral microtia-atresia using softband and implanted bone-anchored hearing devices, and to modify the implantation surgery. The subjects were divided into two groups: the softband group (40 infants, 3 months to 2 years old, Ponto softband) and the implanted group (6 patients, 6-28 years old, Ponto). The Infant-Toddler Meaningful Auditory Integration Scale was used to evaluate auditory development at baseline and after 3, 6, 12, and 24 months, and visual reinforcement audiometry was used to assess the auditory threshold in the softband group. In the implanted group, bone-anchored hearing devices were implanted in combination with auricular reconstruction surgery, and high-resolution CT was used to assess the deformity preoperatively. Auditory thresholds and speech discrimination scores of the patients with implants were measured under the unaided, softband, and implanted conditions. Total Infant-Toddler Meaningful Auditory Integration Scale scores in the softband group improved significantly and approached normal levels. The average visual reinforcement audiometry values under the unaided and softband conditions were 76.75 ± 6.05 dB HL and 32.25 ± 6.20 dB HL (P < 0.01), respectively. In the implanted group, the auditory thresholds under the unaided, softband, and implanted conditions were 59.17 ± 3.76 dB HL, 32.5 ± 2.74 dB HL, and 17.5 ± 5.24 dB HL (P < 0.01), respectively. The respective speech discrimination scores were 23.33 ± 14.72%, 77.17 ± 6.46%, and 96.50 ± 2.66% (P < 0.01). Using softband bone-anchored hearing devices is effective for auditory development and hearing improvement in infants with bilateral microtia-atresia. Wearing softband bone-anchored hearing devices before auricle reconstruction, and combining bone-anchored hearing device implantation with auricular reconstruction surgery, may be the optimal clinical choice for these patients, resulting in greater hearing improvement with minimal surgical and anesthetic injury. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Impaired encoding of rapid pitch information underlies perception and memory deficits in congenital amusia.

    PubMed

    Albouy, Philippe; Cousineau, Marion; Caclin, Anne; Tillmann, Barbara; Peretz, Isabelle

    2016-01-06

    Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia or specific language impairment might be a low-level sensory dysfunction. In the present study, we tested this hypothesis in congenital amusia, a neurodevelopmental disorder characterized by severe deficits in the processing of pitch-based material. We manipulated the temporal characteristics of auditory stimuli and investigated the influence of the time available to encode pitch information on participants' performance in discrimination and short-term memory tasks. Our results show that amusics' performance in such tasks scales with the duration available to encode acoustic information. This suggests that in neurodevelopmental auditory disorders, abnormalities in early steps of auditory processing can underlie the high-level deficits (here, musical disabilities). The observation that slowing the temporal dynamics improves amusics' pitch abilities suggests this approach as a potential tool for remediation in developmental auditory disorders.

  2. Eye closure helps memory by reducing cognitive load and enhancing visualisation.

    PubMed

    Vredeveldt, Annelies; Hitch, Graham J; Baddeley, Alan D

    2011-10-01

    Closing the eyes helps memory. We investigated the mechanisms underlying the eyeclosure effect by exposing 80 eyewitnesses to different types of distraction during the witness interview: blank screen (control), eyes closed, visual distraction, and auditory distraction. We examined the cognitive load hypothesis by comparing any type of distraction (visual or auditory) with minimal distraction (blank screen or eyes closed). We found recall to be significantly better when distraction was minimal, providing evidence that eyeclosure reduces cognitive load. We examined the modality-specific interference hypothesis by comparing the effects of visual and auditory distraction on recall of visual and auditory information. Visual and auditory distraction selectively impaired memory for information presented in the same modality, supporting the role of visualisation in the eyeclosure effect. Analysis of recall in terms of grain size revealed that recall of basic information about the event was robust, whereas recall of specific details was prone to both general and modality-specific disruptions.

  3. Prospects for Replacement of Auditory Neurons by Stem Cells

    PubMed Central

    Shi, Fuxin; Edge, Albert S.B.

    2013-01-01

    Sensorineural hearing loss is caused by degeneration of hair cells or auditory neurons. Spiral ganglion cells, the primary afferent neurons of the auditory system, are patterned during development and send out projections to hair cells and to the brainstem under the control of largely unknown guidance molecules. The neurons do not regenerate after loss and even damage to their projections tends to be permanent. The genesis of spiral ganglion neurons and their synapses forms a basis for regenerative approaches. In this review we critically present the current experimental findings on auditory neuron replacement. We discuss the latest advances with a focus on (a) exogenous stem cell transplantation into the cochlea for neural replacement, (b) expression of local guidance signals in the cochlea after loss of auditory neurons, (c) the possibility of neural replacement from an endogenous cell source, and (d) functional changes from cell engraftment. PMID:23370457

  4. The spectrotemporal filter mechanism of auditory selective attention

    PubMed Central

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  5. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously sounding objects, and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music similarly often requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias towards processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) of deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups show a similarly more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduce but do not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression), as reflected in auditory nerve firing patterns, might account for the high-voice superiority effect. Simulations show that both place and temporal auditory nerve coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the high-voice superiority observed in human ERP and psychophysical music listening studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836

  7. Serial auditory-evoked potentials in the diagnosis and monitoring of a child with Landau-Kleffner syndrome.

    PubMed

    Plyler, Erin; Harkrider, Ashley W

    2013-01-01

    A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr, his vocabulary had diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and to emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists, who are often on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists are often among the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance of early and accurate referral, diagnosis, and monitoring of a child with LKS, which is imperative for a positive prognosis. American Academy of Audiology.

  8. Piglets Learn to Use Combined Human-Given Visual and Auditory Signals to Find a Hidden Reward in an Object Choice Task

    PubMed Central

    Bensoussan, Sandy; Cornil, Maude; Meunier-Salaün, Marie-Christine; Tallet, Céline

    2016-01-01

    Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals, individually or in combination with other signals, to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets' ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals. PMID:27792731

  10. Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex.

    PubMed

    Phillips, D P; Farmer, M E

    1990-11-15

    This paper explores the nature of the processing disorder that underlies the speech discrimination deficit in the syndrome of acquired word deafness following pathology of the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.

  11. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis

    PubMed Central

    Altieri, Nicholas; Wenger, Michael J.

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358

  13. Music and language: relations and disconnections.

    PubMed

    Kraus, Nina; Slater, Jessica

    2015-01-01

    Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.

  14. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension: processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., the effect of temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation between the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
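
    As a concrete illustration of the two behavioral measures contrasted above, the sketch below computes a congruence (crossmodal interference) effect and an attention switch cost per target modality from a trial-level RT table. The column names and simulated data are hypothetical placeholders, not the authors' design.

```python
import numpy as np
import pandas as pd

# Hypothetical trial-level data for a bimodal switching experiment
rng = np.random.default_rng(8)
n = 400
df = pd.DataFrame({
    'modality': rng.choice(['auditory', 'visual'], n),   # target modality
    'transition': rng.choice(['repeat', 'switch'], n),
    'congruent': rng.choice([True, False], n),
    'rt': rng.gamma(5.0, 90.0, n),                       # RT in ms
})

# Crossmodal interference: incongruent minus congruent mean RT
interference = (df[~df['congruent']].groupby('modality')['rt'].mean()
                - df[df['congruent']].groupby('modality')['rt'].mean())
# Attention switch cost: switch minus repeat mean RT
switch_cost = (df[df['transition'] == 'switch'].groupby('modality')['rt'].mean()
               - df[df['transition'] == 'repeat'].groupby('modality')['rt'].mean())
print(interference, switch_cost, sep='\n')
```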

  15. Modalities of memory: is reading lips like hearing voices?

    PubMed

    Maidment, David W; Macken, Bill; Jones, Dylan M

    2013-12-01

    Functional similarities in verbal memory performance across presentation modalities (written, heard, lipread) are often taken to point to a common underlying representational form upon which the modalities converge. We show here instead that the pattern of performance depends critically on presentation modality and different mechanisms give rise to superficially similar effects across modalities. Lipread recency is underpinned by different mechanisms to auditory recency, and while the effect of an auditory suffix on an auditory list is due to the perceptual grouping of the suffix with the list, the corresponding effect with lipread speech is due to misidentification of the lexical content of the lipread suffix. Further, while a lipread suffix does not disrupt auditory recency, an auditory suffix does disrupt recency for lipread lists. However, this effect is due to attentional capture ensuing from the presentation of an unexpected auditory event, and is evident both with verbal and nonverbal auditory suffixes. These findings add to a growing body of evidence that short-term verbal memory performance is determined by modality-specific perceptual and motor processes, rather than by the storage and manipulation of phonological representations. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Cross-modal versus within-modal recall: differences in behavioral and brain responses.

    PubMed

    Butler, Andrew J; James, Karin H

    2011-10-31

    Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations comprised of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Attention to memory: orienting attention to sound object representations.

    PubMed

    Backer, Kristina C; Alain, Claude

    2014-01-01

    Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.

  18. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    PubMed

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14 and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds, were created specifically for this study. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit that is only partially influenced by the duration of the stimuli and of the inter-stimulus intervals (ISIs).

  19. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, reflecting enhanced perceptual skills for the key features of pop music, a genre in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had proved superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318
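
    The abstract does not state how the difference limens were tracked; a common choice in such psychoacoustic work is an adaptive staircase. The sketch below shows a generic 2-down/1-up staircase (Levitt, 1971) run against a simulated listener. The starting value, step factor, and the listener_correct function are assumptions for illustration, not the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(9)

def listener_correct(delta_f, true_dlf=5.0):
    """Simulated listener whose probability of a correct response grows
    with the frequency difference delta_f (Hz). Purely illustrative."""
    p_correct = 1.0 - 0.5 * np.exp(-delta_f / true_dlf)
    return rng.random() < p_correct

def two_down_one_up(start=20.0, factor=2.0, n_reversals=12):
    """Geometric 2-down/1-up staircase; converges near the 70.7%-correct
    point (Levitt, 1971). Threshold = mean of the last eight reversals."""
    delta, n_correct = start, 0
    last_dir, reversals = None, []
    while len(reversals) < n_reversals:
        if listener_correct(delta):
            n_correct += 1
            if n_correct < 2:
                continue            # wait for a second consecutive correct
            n_correct, direction = 0, 'down'
            delta = max(delta / factor, 0.1)
        else:
            n_correct, direction = 0, 'up'
            delta *= factor
        if last_dir is not None and direction != last_dir:
            reversals.append(delta)
        last_dir = direction
    return np.mean(reversals[-8:])

print(f"Estimated DLF: {two_down_one_up():.1f} Hz")
```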

  20. Object representation in the human auditory system

    PubMed Central

    Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto

    2010-01-01

    One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus, these tones formed a pitch 'border' between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus, tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636
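
    To make the stimulus construction concrete, the sketch below generates a cyclically repeating sequence of low-, intermediate- and high-pitched tones with short onset-to-onset intervals. The specific frequencies, cycle order, tone duration, and SOA are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

fs = 44100  # sampling rate (Hz)

def tone(freq, dur=0.075, ramp=0.005):
    """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(fs * dur)) / fs
    y = np.sin(2 * np.pi * freq * t)
    n = int(fs * ramp)
    env = np.ones_like(y)
    env[:n] = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    env[-n:] = env[:n][::-1]
    return y * env

# Hypothetical pitches and cycle; the actual study values differ.
freqs = {'L': 400.0, 'I': 566.0, 'H': 800.0}   # low / intermediate / high
cycle = ['L', 'I', 'H', 'L', 'H', 'I']          # fixed, cyclically repeating
soa = 0.100                                     # short onset-to-onset interval (s)
n_cycles = 10

seq = np.zeros(int(fs * soa * len(cycle) * n_cycles))
for i, label in enumerate(cycle * n_cycles):
    start = int(i * soa * fs)
    y = tone(freqs[label])
    seq[start:start + len(y)] += y
```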

  1. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    PubMed Central

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  2. Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?

    PubMed

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally- and temporally-dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate reduced ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.

  3. Preattentive binding of auditory and visual stimulus features.

    PubMed

    Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo

    2005-02-01

    We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.

  4. Age-related differences in neuromagnetic brain activity underlying concurrent sound perception.

    PubMed

    Alain, Claude; McDonald, Kelly L

    2007-02-07

    Deficits in parsing concurrent auditory events are believed to contribute to older adults' difficulties in understanding speech in adverse listening conditions (e.g., cocktail party). To explore the level at which aging impairs sound segregation, we measured auditory evoked fields (AEFs) using magnetoencephalography while young, middle-aged, and older adults were presented with complex sounds that either had all of their harmonics in tune or had the third harmonic mistuned by 4 or 16% of its original value. During the recording, participants were asked to ignore the stimuli and watch a muted subtitled movie of their choice. For each participant, the AEFs were modeled with a pair of dipoles in the superior temporal plane, and the effects of age and mistuning were examined on the amplitude and latency of the resulting source waveforms. Mistuned stimuli generated an early positivity (60-100 ms), an object-related negativity (ORN) (140-180 ms) that overlapped the N1 and P2 waves, and a positive displacement that peaked at approximately 230 ms (P230) after sound onset. The early mistuning-related enhancement was similar in all three age groups, whereas the subsequent modulations (ORN and P230) were reduced in older adults. These age differences in auditory cortical activity were associated with a reduced likelihood of hearing two sounds as a function of mistuning. The results reveal that inharmonicity is rapidly and automatically registered in all three age groups but that the perception of concurrent sounds declines with age.
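
    The mistuned-harmonic stimuli described above are straightforward to synthesize. The sketch below builds a harmonic complex whose third harmonic can be shifted by 4% or 16% of its nominal frequency; the fundamental, number of harmonics, and duration are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def harmonic_complex(f0=220.0, n_harmonics=10, mistuned_harmonic=3,
                     mistune_pct=0.0, dur=0.5, fs=44100):
    """Complex tone with all harmonics in tune except one, shifted by
    mistune_pct percent of its nominal frequency (cf. the 4% and 16%
    mistunings of the third harmonic described above)."""
    t = np.arange(int(fs * dur)) / fs
    y = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned_harmonic:
            f *= 1.0 + mistune_pct / 100.0
        y += np.sin(2 * np.pi * f * t)
    return y / n_harmonics

in_tune = harmonic_complex(mistune_pct=0.0)
mistuned = harmonic_complex(mistune_pct=16.0)   # tends to evoke "two sounds"
```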

  5. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex.

    PubMed

    Kajikawa, Yoshinao; Smiley, John F; Schroeder, Charles E

    2017-10-18

    Prior studies have reported "local" field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be "contaminated" by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such "far-field" activity, in addition to, or in absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the default assumption is that FPs originate from local activity, and thus are termed "local" (LFP). We examine this general problem in the context of previously reported face-evoked FPs in macaque auditory cortex. Our findings suggest that face-FPs are indeed generated in the underlying inferotemporal cortex and volume-conducted to the auditory cortex. The note of caution raised by these findings is of particular importance for studies that seek to assign FP/LFP recordings to specific cortical layers. Copyright © 2017 the authors 0270-6474/17/3710139-15$15.00/0.
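
    The CSD analysis referred to above estimates local transmembrane current flow as the negative second spatial derivative of the laminar field-potential profile. Below is a minimal finite-difference sketch; the contact spacing and conductivity values are illustrative, not those of the study.

```python
import numpy as np

def csd(lfp, spacing_mm=0.1, sigma=0.3):
    """One-dimensional current source density: the negative second spatial
    derivative of the laminar LFP profile (channels x time), approximated
    with finite differences across equally spaced contacts. sigma (S/m)
    and spacing (mm) are illustrative values only."""
    lfp = np.asarray(lfp)
    d2 = lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]
    return -sigma * d2 / spacing_mm ** 2

# Simulated 24-channel laminar recording, 500 time samples
rng = np.random.default_rng(1)
lfp = rng.standard_normal((24, 500)).cumsum(axis=1) * 1e-5
print(csd(lfp).shape)   # (22, 500): the two edge channels are lost
```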

  6. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex

    PubMed Central

    Smiley, John F.; Schroeder, Charles E.

    2017-01-01

    Prior studies have reported “local” field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be “contaminated” by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such “far-field” activity, in addition to, or in absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the default assumption is that FPs originate from local activity, and thus are termed “local” (LFP). We examine this general problem in the context of previously reported face-evoked FPs in macaque auditory cortex. Our findings suggest that face-FPs are indeed generated in the underlying inferotemporal cortex and volume-conducted to the auditory cortex. The note of caution raised by these findings is of particular importance for studies that seek to assign FP/LFP recordings to specific cortical layers. PMID:28924008

  7. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  8. Leftward Lateralization of Auditory Cortex Underlies Holistic Sound Perception in Williams Syndrome

    PubMed Central

    Bendszus, Martin; Schneider, Peter

    2010-01-01

    Background: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype, including a strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints, we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Methodology/Principal Findings: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, the volume of the left auditory cortex was increased 2.2-fold in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have previously been reported for professional musicians. Conclusions/Significance: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or to innate disposition. In this study, musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but also propose WS as a unique genetic model for training-independent auditory system properties. PMID:20808792

  9. Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?

    ERIC Educational Resources Information Center

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…

  10. Where the imaginal appears real: A positron emission tomography study of auditory hallucinations

    PubMed Central

    Szechtman, Henry; Woody, Erik; Bowers, Kenneth S.; Nahmias, Claude

    1998-01-01

    An auditory hallucination shares with imaginal hearing the property of being self-generated and with real hearing the experience of the stimulus being an external one. To investigate where in the brain an auditory event is “tagged” as originating from the external world, we used positron emission tomography to identify neural sites activated by both real hearing and hallucinations but not by imaginal hearing. Regional cerebral blood flow was measured during hearing, imagining, and hallucinating in eight healthy, highly hypnotizable male subjects prescreened for their ability to hallucinate under hypnosis (hallucinators). Control subjects were six highly hypnotizable male volunteers who lacked the ability to hallucinate under hypnosis (nonhallucinators). A region in the right anterior cingulate (Brodmann area 32) was activated in the group of hallucinators when they heard an auditory stimulus and when they hallucinated hearing it but not when they merely imagined hearing it. The same experimental conditions did not yield this activation in the group of nonhallucinators. Inappropriate activation of the right anterior cingulate may lead self-generated thoughts to be experienced as external, producing spontaneous auditory hallucinations. PMID:9465124

  11. Brief report: atypical neuromagnetic responses to illusory auditory pitch in children with autism spectrum disorders.

    PubMed

    Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W

    2013-11-01

    Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
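
    A dichotic pitch of the kind described above can be synthesized by giving a narrow frequency band of an otherwise identical binaural noise a sub-millisecond interaural time shift. The sketch below is a generic Huggins-style construction; the center frequency, bandwidth, and delay are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def dichotic_pitch(dur=1.0, fs=44100, f_center=600.0, bw=50.0, itd_us=500.0):
    """Huggins-style dichotic pitch: both ears receive the same white noise,
    except that a narrow band around f_center is given a sub-millisecond
    interaural time shift. Neither ear alone contains a tone; binaural
    fusion yields an illusory pitch near f_center."""
    rng = np.random.default_rng(2)
    noise = rng.standard_normal(int(fs * dur))
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(noise), 1.0 / fs)
    band = (freqs > f_center - bw / 2) & (freqs < f_center + bw / 2)
    # Phase shift equivalent to a time delay of itd_us microseconds
    spec_r = spec.copy()
    spec_r[band] *= np.exp(-2j * np.pi * freqs[band] * itd_us * 1e-6)
    left, right = noise, np.fft.irfft(spec_r, n=len(noise))
    return np.stack([left, right])   # (2, n_samples) stereo signal

stereo = dichotic_pitch()
control = dichotic_pitch(itd_us=0.0)   # no interaural difference: no pitch
```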

  12. Music playing and memory trace: evidence from event-related potentials.

    PubMed

    Kamiyama, Keiko; Katahira, Kentaro; Abla, Dilshat; Hori, Koji; Okanoya, Kazuo

    2010-08-01

    We examined the relationship between motor practice and auditory memory for sound sequences to evaluate the hypothesis that practice involving physical performance might enhance auditory memory. Participants learned two unfamiliar sound sequences using different training methods. Under the key-press condition, they learned a melody while pressing a key during auditory input. Under the no-key-press condition, they listened to another melody without any key pressing. The two melodies were presented alternately, and all participants were trained in both methods. Participants were instructed to pay attention under both conditions. After training, they listened to the two melodies again without pressing keys, and ERPs were recorded. During the ERP recordings, 10% of the tones in these melodies deviated from the originals. The grand-average ERPs showed that the amplitude of mismatch negativity (MMN) elicited by deviant stimuli was larger under the key-press condition than under the no-key-press condition. This effect appeared only in the high absolute pitch group, which included those with a pronounced ability to identify a note without external reference. This result suggests that the effect of training with key pressing was mediated by individual musical skills. Copyright 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  13. Examining the short term effects of emotion under an Adaptation Level Theory model of tinnitus perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2017-03-01

    Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli used were taken from the International Affective Digital Sounds- Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation level matches to external sounds. Males had significantly different emotional regulation scores than females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements, and gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be used for sound therapy for tinnitus relief; it is of interest to determine whether the emotional component of sound treatments can play a role in reversing the negative responses discussed in this paper. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Classification of underwater target echoes based on auditory perception characteristics

    NASA Astrophysics Data System (ADS)

    Li, Xiukun; Meng, Xiangxia; Liu, Hang; Liu, Mingye

    2014-06-01

    In underwater target detection, bottom reverberation shares some properties with the target echo, which has a great impact on detection performance. It is therefore essential to study the differences between target echoes and reverberation. In this paper, motivated by the unique ability of human listeners to distinguish objects by ear, the Gammatone filter is taken as the auditory model. In addition, time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. The features of the experimental data are tightly concentrated within each class and differ substantially between classes, which shows that this method can effectively distinguish between target echo and reverberation.
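
    The Gammatone auditory model mentioned above has a standard closed form, g(t) = t^(n-1) exp(-2π·1.019·ERB(fc)·t) cos(2π·fc·t). The sketch below implements this impulse response and a crude filterbank-energy "auditory spectrum" feature; the channel count, frequency range, and feature definition are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def gammatone_ir(fc, fs=44100, order=4, dur=0.05):
    """Impulse response of a gammatone filter centred at fc (Hz):
    g(t) = t^(order-1) * exp(-2*pi*1.019*ERB(fc)*t) * cos(2*pi*fc*t),
    with ERB(fc) after Glasberg & Moore (1990)."""
    t = np.arange(int(fs * dur)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    g = t ** (order - 1) * np.exp(-2 * np.pi * 1.019 * erb * t) \
        * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def auditory_spectrum(x, fs=44100, n_channels=32):
    """Per-channel energy at the output of a gammatone filterbank, a crude
    'auditory spectrum' feature for echo/reverberation classification."""
    fcs = np.geomspace(100.0, 8000.0, n_channels)   # log-spaced centres
    return np.array([np.sum(np.convolve(x, gammatone_ir(fc, fs), 'same') ** 2)
                     for fc in fcs])

rng = np.random.default_rng(3)
snippet = rng.standard_normal(4410)     # stand-in for an echo snippet
features = auditory_spectrum(snippet)   # one 32-dimensional feature vector
```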

  15. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  16. Independent component analysis for cochlear implant artifacts attenuation from electrically evoked auditory steady-state response measurements

    NASA Astrophysics Data System (ADS)

    Deprez, Hanne; Gransier, Robin; Hofmann, Michael; van Wieringen, Astrid; Wouters, Jan; Moonen, Marc

    2018-02-01

    Objective. Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach. ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results. For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance. We hypothesize that ICA is capable of separating CI artifacts and EASSR in case the contralateral hemisphere is EASSR dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
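
    A minimal version of the spectral-identification idea above can be sketched with a generic ICA implementation: decompose the multichannel EEG, flag components whose power is concentrated in a stimulation-related band, and back-project the rest. The band limits, the threshold, and the use of scikit-learn's FastICA are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def attenuate_ci_artifacts(eeg, fs, band=(35.0, 45.0), ratio=0.5):
    """Decompose (channels x samples) EEG with FastICA, zero components
    whose spectral power is concentrated in a stimulation-related band,
    and back-project the remaining components."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)            # samples x components
    freqs = np.fft.rfftfreq(sources.shape[0], 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    cleaned = np.zeros_like(sources)
    for k in range(sources.shape[1]):
        psd = np.abs(np.fft.rfft(sources[:, k])) ** 2
        if psd[in_band].sum() / psd.sum() < ratio:   # not artifact-dominated
            cleaned[:, k] = sources[:, k]
    return ica.inverse_transform(cleaned).T

rng = np.random.default_rng(4)
eeg = rng.standard_normal((8, 4096))   # toy 8-channel recording
clean = attenuate_ci_artifacts(eeg, fs=1024.0)
```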

  17. Effect of Auditory Constraints on Motor Performance Depends on Stage of Recovery Post-Stroke

    PubMed Central

    Aluru, Viswanath; Lu, Ying; Leung, Alan; Verghese, Joe; Raghavan, Preeti

    2014-01-01

    In order to develop evidence-based rehabilitation protocols post-stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post-stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in 20 subjects with chronic hemiparesis. We used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups, which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post-stroke. PMID:25002859
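
    The stratification step above can be sketched with standard tools: Mahalanobis distances over the baseline kinematic measures, followed by agglomerative clustering. The feature values, linkage method, and group structure below are hypothetical placeholders, not the study's data or exact settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-subject kinematics: baseline movement speed and extent
# of wrist extension (columns); three plausible subgroups simulated.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0.2, 10.0], [0.05, 3.0], (7, 2)),
               rng.normal([0.5, 25.0], [0.05, 4.0], (7, 2)),
               rng.normal([0.9, 45.0], [0.05, 5.0], (6, 2))])

# Mahalanobis distances adjust for the differing scales and correlation
# of the two measures before agglomerative clustering.
d = pdist(X, metric='mahalanobis')      # VI defaults to inv(cov(X))
Z = linkage(d, method='average')
groups = fcluster(Z, t=3, criterion='maxclust')
print(groups)                           # cluster label per subject
```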

  18. Evaluation of a compact tinnitus therapy by electrophysiological tinnitus decompensation measures.

    PubMed

    Low, Yin Fen; Argstatter, Heike; Bolay, Hans Volker; Strauss, Daniel J

    2008-01-01

    Large-scale neural correlates of tinnitus decompensation have been identified by using wavelet phase stability criteria of single-sweep sequences of auditory late responses (ALRs). Our previous work showed that the synchronization stability in ALR sequences can be used for objective quantification of tinnitus decompensation and attention, which links to the Jastreboff tinnitus model. In this study, we intend to provide an objective evaluation for quantifying the effect of music therapy in tinnitus patients. Using the maximum entropy auditory paradigm, we examined neural correlates of the attentional mechanism in single-sweep sequences of ALRs in chronic tinnitus patients who underwent a compact therapy course. Results by our measure showed that the extent of differentiation between attended and unattended conditions improved significantly after the therapy. It is concluded that the wavelet phase synchronization stability of ALR single sweeps can be used for the objective evaluation of tinnitus therapies, in this case the compact tinnitus music therapy.
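
    A generic stand-in for the wavelet phase-stability measure is inter-trial phase coherence: convolve each single sweep with a complex Morlet wavelet and average the unit phase vectors across sweeps. The frequency, wavelet width, and simulated sweeps below are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np

def itpc(sweeps, fs, freq=6.0, n_cycles=5.0):
    """Inter-trial phase coherence of single sweeps (trials x samples):
    convolve each sweep with a complex Morlet wavelet at `freq` and average
    the unit phase vectors across trials. Values near 1 indicate stable
    phase across sweeps; values near 0, unstable phase."""
    t = np.arange(-1.0, 1.0, 1.0 / fs)
    sigma = n_cycles / (2 * np.pi * freq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    unit_phasors = []
    for sweep in sweeps:
        analytic = np.convolve(sweep, wavelet, mode='same')
        unit_phasors.append(analytic / np.abs(analytic))
    return np.abs(np.mean(unit_phasors, axis=0))

rng = np.random.default_rng(6)
sweeps = rng.standard_normal((40, 512))   # 40 simulated single sweeps
print(itpc(sweeps, fs=256.0).mean())      # low for phase-incoherent noise
```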

  19. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    PubMed

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  20. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps

    PubMed Central

    Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237

  1. Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery

    PubMed Central

    Henry, Kenneth S.; Heinz, Michael G.

    2013-01-01

    People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. PMID:23376018
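
    Phase locking to temporal fine structure, as discussed above, is conventionally quantified by vector strength, VS = |(1/N) Σ_k exp(i·2πf·t_k)| over spike times t_k. A minimal sketch with simulated, jittered spike times (the tone frequency and jitter are hypothetical):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength: VS = |(1/N) * sum_k exp(i * 2*pi*freq*t_k)|.
    1.0 means every spike falls at the same stimulus phase (perfect
    phase locking); 0.0 means spikes are uniformly spread over phase."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(7)
# Spikes locked to a 500 Hz tone, with 0.2 ms of temporal jitter
t_spikes = np.arange(200) / 500.0 + rng.normal(0.0, 2e-4, 200)
print(vector_strength(t_spikes, 500.0))   # near 1; more jitter lowers it
```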

  2. Contralateral Noise Stimulation Delays P300 Latency in School-Aged Children.

    PubMed

    Ubiali, Thalita; Sanfins, Milaine Dominici; Borges, Leticia Reis; Colella-Santos, Maria Francisca

    2016-01-01

    The auditory cortex modulates auditory afferents through the olivocochlear system, which innervates the outer hair cells and the afferent neurons under the inner hair cells in the cochlea. Most of the studies that investigated efferent activity in humans focused on evaluating the suppression of otoacoustic emissions by stimulating the contralateral ear with noise, which assesses the activation of the medial olivocochlear bundle. The neurophysiology and the mechanisms involving efferent activity on higher regions of the auditory pathway, however, are still unknown. Also, the lack of studies investigating the effects of noise on the human auditory cortex, especially in the pediatric population, points to the need for recording the late auditory potentials in noise conditions. Assessing the auditory efferents in school-aged children is highly important because of the system's attributed functions, such as selective attention and signal detection in noise, which are important abilities related to the development of language and academic skills. For this reason, the aim of the present study was to evaluate the effects of noise on P300 responses of children with normal hearing. P300 was recorded in 27 children aged from 8 to 14 years with normal hearing in two conditions: with and without contralateral white noise stimulation. P300 latencies were significantly longer in the presence of contralateral noise. No significant changes were observed for the amplitude values. Contralateral white noise stimulation delayed P300 latency in a group of school-aged children with normal hearing. These results suggest a possible influence of medial olivocochlear activation on P300 responses under noise conditions.

  3. What drives the perceptual change resulting from speech motor adaptation? Evaluation of hypotheses in a Bayesian modeling framework

    PubMed Central

    Perrier, Pascal; Schwartz, Jean-Luc; Diard, Julien

    2018-01-01

    Shifts in perceptual boundaries resulting from speech motor learning induced by perturbations of the auditory feedback were taken as evidence for the involvement of motor functions in auditory speech perception. Beyond this general statement, the precise mechanisms underlying this involvement are not yet fully understood. In this paper we propose a quantitative evaluation of some hypotheses concerning the motor and auditory updates that could result from motor learning, in the context of various assumptions about the roles of the auditory and somatosensory pathways in speech perception. This analysis was made possible thanks to the use of a Bayesian model that implements these hypotheses by expressing the relationships between speech production and speech perception in a joint probability distribution. The evaluation focuses on how the hypotheses can (1) predict the location of perceptual boundary shifts once the perturbation has been removed, (2) account for the magnitude of the compensation in presence of the perturbation, and (3) describe the correlation between these two behavioral characteristics. Experimental findings about changes in speech perception following adaptation to auditory feedback perturbations serve as reference. Simulations suggest that they are compatible with a framework in which motor adaptation updates both the auditory-motor internal model and the auditory characterization of the perturbed phoneme, and where perception involves both auditory and somatosensory pathways. PMID:29357357
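
    The flavor of such a model can be conveyed with a deliberately tiny fusion sketch, not the authors' full model: a phoneme category φ is inferred from conditionally independent auditory and somatosensory cues via P(φ | a, s) ∝ P(a | φ) P(s | φ) P(φ), and shifting an auditory prototype (a toy "auditory update" after adaptation) moves the perceptual boundary. All prototypes, dispersions, and units below are hypothetical.

```python
import numpy as np

def gauss(x, mu, sd):
    """Gaussian likelihood density."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hypothetical category prototypes along one auditory and one
# somatosensory dimension (arbitrary units); flat prior over categories.
mu_aud = {'e': 1.8, 'eh': 2.4}
mu_som = {'e': 0.4, 'eh': 0.7}

def posterior(a, s, sd_a=0.2, sd_s=0.15):
    """P(phi | a, s) proportional to P(a | phi) * P(s | phi) * P(phi), with
    conditionally independent auditory and somatosensory pathways."""
    post = {phi: gauss(a, mu_aud[phi], sd_a) * gauss(s, mu_som[phi], sd_s)
            for phi in mu_aud}
    z = sum(post.values())
    return {phi: p / z for phi, p in post.items()}

print(posterior(2.1, 0.55))   # a cue near the boundary between categories
mu_aud['e'] += 0.15           # toy "auditory update" after motor adaptation
print(posterior(2.1, 0.55))   # the perceptual boundary has shifted
```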

  4. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Using Facebook to Reach People Who Experience Auditory Hallucinations.

    PubMed

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-06-14

    Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. Women, people who use mobile phones, and those experiencing more distress were reportedly more open to using Facebook as a support and/or therapeutic tool in the future. Facebook advertisements can be used to recruit research participants who experience auditory hallucinations quickly and in a cost-effective manner. Most (58%) Web-based respondents are open to Facebook-based support and treatment and are willing to describe their subjective experiences with auditory hallucinations.

  6. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats

    PubMed Central

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies

    2016-01-01

    Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and impaired their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE has a key role in A1 and attention of stressed rats during tone discrimination. PMID:28082872
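
    The difference score (DS) is simple enough to state in a few lines; the sketch below computes it under the paper's definition (correct trials before the stress period minus correct trials after). The trial counts are hypothetical.

    ```python
    correct_before = 43  # correct responses in 50 pre-stress 2-ACT trials (hypothetical)
    correct_after = 35   # correct responses in 50 post-stress trials (hypothetical)

    ds = correct_before - correct_after  # positive DS = attention deteriorated
    print(f"DS = {ds} (performance dropped by {ds} correct trials)")
    ```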

  7. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. simple detection, discrimination, and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.

  8. Evaluation of auditory perception development in neonates by event-related potential technique.

    PubMed

    Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan

    2017-08-01

    To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development and sex using event-related potential (ERP) technique. Sixty full-term neonates, consisting of 32 males and 28 females, aged 2-28 days were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age, and to compare the left and right hemispheres and males and females. Average wave forms of ERPs in neonates started from relatively irregular flat-bottomed troughs to relatively regular steep-sided ripples. A good linear relationship between ERPs and days after birth in neonates was observed. As days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P<0.01). N2 areas in the central part of the brain were significantly greater, and N2 latencies in the central part were significantly shorter in the left hemisphere compared with the right, indicative of left hemisphere dominance (both P<0.05). N2 areas were greater and N2 latencies shorter in female neonates compared with males. The neonatal period is one of rapid auditory perception development. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
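
    A minimal sketch of the kind of linear-relationship analysis reported above, regressing N2 latency on days after birth. The data points are synthetic, chosen only to mimic the reported direction of the effect (latencies shorten with age).

    ```python
    import numpy as np

    days = np.array([2, 5, 9, 14, 19, 24, 28])                     # age at testing (days)
    n2_latency_ms = np.array([305, 298, 290, 281, 272, 264, 259])  # hypothetical latencies

    slope, intercept = np.polyfit(days, n2_latency_ms, 1)          # least-squares line
    r = np.corrcoef(days, n2_latency_ms)[0, 1]
    print(f"latency ~ {slope:.2f} ms/day * age + {intercept:.1f} ms, r = {r:.3f}")
    ```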

  9. Auditory Evoked Potentials for the Evaluation of Hearing Sensitivity in Navy Dolphins. Assessment of Hearing Sensitivity in Adult Male Elephant Seals

    DTIC Science & Technology

    2006-12-01

    Abstract text unavailable; the indexed record preserves only citation fragments from the report: Finneran, J. J. and Houser, D. S. 2004. Objective measures of steady-state… Gervais' beaked whale auditory evoked potential hearing measurements. 16th Biennial Conference on the Biology of Marine Mammals, San Diego, California, 12-16 December.

  10. Auditory risk estimates for youth target shooting

    PubMed Central

    Meinke, Deanna K.; Murphy, William J.; Finan, Donald S.; Lankford, James E.; Flamme, Gregory A.; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W.

    2015-01-01

    Objective To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or a reflective surface when compared to adult shooters. Design Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Study sample Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter's left ear. Results All firearms generated peak levels that exceeded the 120 dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8, and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Conclusion Youth should utilize smaller caliber weapons, preferably from the standing position, and always wear hearing protection whenever engaging in shooting activities to reduce the risk for auditory damage. PMID:24564688
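
    The core damage-risk screening step, comparing measured peak levels against the WHO 120-dB peak limit for children, can be illustrated as below. The firearm configurations and levels are hypothetical stand-ins, not the study's measurements.

    ```python
    WHO_CHILD_PEAK_LIMIT_DB = 120.0   # WHO (1999) peak limit for children

    measurements = {                  # dB peak SPL at the shooter's ear (hypothetical)
        "rifle_22LR_standing": 143.2,
        "shotgun_20ga_table": 155.7,
        "pistol_9mm_table": 159.1,
    }

    for config, peak_db in measurements.items():
        over = peak_db - WHO_CHILD_PEAK_LIMIT_DB
        print(f"{config}: {peak_db:.1f} dB peak ({over:+.1f} dB re WHO child limit)")
    ```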

  11. Auditory Risk of Air Rifles

    PubMed Central

    Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.

    2016-01-01

    Objective To characterize the impulse noise exposure and auditory risk for air rifle users for both youth and adults. Design Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, but 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors and the pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion To minimize auditory risk, youth should utilize air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923

  12. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    PubMed

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  13. A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing

    PubMed Central

    Aboitiz, Francisco

    2018-01-01

    In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657

  14. Psychophysiological responses to masked auditory stimuli.

    PubMed

    Borgeat, F; Elie, R; Chaloult, L; Chabot, R

    1985-02-01

    Psychophysiological responses to masked auditory verbal stimuli of increasing intensities were studied in twenty healthy women. Two experimental sessions corresponding to two stimulation contents (neutral or emotional) were conducted. At each session, two different sets of instructions (attending or not attending to stimuli) were used successively. Verbal stimuli, masked by a 40-dB white noise, were presented to the subject at increasing intensities by increments of 5 dB starting at 0 dB. At each increment, frontal EMG, skin conductance and heart rate were recorded. The data were submitted to analyses of variance and covariance. Psychophysiological responses to stimuli below the thresholds of identification and detection were observed. The instruction not to attend to the stimuli modified the patterns of physiological responses. The effect of the affective content of the stimuli on responses was stronger when not attending. The results show the possibility of psychophysiological responses to masked auditory stimuli and suggest that psychophysiological parameters can constitute objective and useful measures for research in auditory subliminal perception.

  15. Stability of auditory discrimination and novelty processing in physiological aging.

    PubMed

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, Mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with an auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  16. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    NASA Astrophysics Data System (ADS)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
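
    Two of the coupling measures named above are easy to sketch. Assuming band-limited single-channel signals, the example below computes spectral coherence (via scipy) and the phase-locking value (PLV) from Hilbert-transform phases; the data are synthetic, and a real analysis would work on epoched EEG, band by band.

    ```python
    import numpy as np
    from scipy.signal import coherence, hilbert

    fs = 250                                  # sampling rate (Hz)
    rng = np.random.default_rng(0)
    t = np.arange(0, 2, 1 / fs)
    shared = np.sin(2 * np.pi * 6 * t)        # common 6-Hz (theta) component
    x = shared + 0.5 * rng.standard_normal(t.size)
    y = shared + 0.5 * rng.standard_normal(t.size)

    # spectral coherence: consistency of cross-spectra across segments
    f, coh = coherence(x, y, fs=fs, nperseg=fs)
    print(f"coherence near 6 Hz: {coh[np.argmin(np.abs(f - 6))]:.2f}")

    # phase-locking value: magnitude of the mean phase difference
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    plv = np.abs(np.mean(np.exp(1j * phase_diff)))
    print(f"PLV: {plv:.2f}")
    ```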

  17. Plastic brain mechanisms for attaining auditory temporal order judgment proficiency.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-04-15

    Accurate perception of the order of occurrence of sensory information is critical for the building up of coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the course of the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed an increase in the lateralization of initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, as the experiment progressed, responses in the left and right PSR went from being correlated to un-correlated. These collective findings provide insights on the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike timing dependent plasticity. Copyright 2010 Elsevier Inc. All rights reserved.
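
    The strength-versus-topography distinction drawn above is conventionally quantified with global field power (GFP) and global map dissimilarity (DISS). The sketch below implements the standard definitions on two invented 8-electrode maps; it is illustrative only, not the authors' code.

    ```python
    import numpy as np

    map1 = np.array([1.2, -0.8, 0.5, 2.0, -1.1, 0.3, -0.6, 0.9])  # µV, time point 1
    map2 = np.array([0.9, -1.0, 0.8, 1.5, -0.9, 0.6, -0.4, 0.7])  # µV, time point 2

    def gfp(v: np.ndarray) -> float:
        """Root-mean-square of average-referenced potentials across electrodes."""
        v = v - v.mean()
        return float(np.sqrt(np.mean(v ** 2)))

    def diss(v1: np.ndarray, v2: np.ndarray) -> float:
        """GFP of the difference between strength-normalized maps (0 to 2)."""
        n1 = (v1 - v1.mean()) / gfp(v1)
        n2 = (v2 - v2.mean()) / gfp(v2)
        return gfp(n1 - n2)

    print(f"GFP1 = {gfp(map1):.2f}, GFP2 = {gfp(map2):.2f}, DISS = {diss(map1, map2):.2f}")
    ```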

  18. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    PubMed

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
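
    The conditional-probability computation underlying such segmentation is compact. The sketch below estimates transitional probabilities between adjacent sounds in an invented stream, where low-probability transitions mark candidate unit boundaries.

    ```python
    from collections import Counter

    stream = list("ABCABCXYZXYZABCXYZ")   # each letter = one sound/syllable (invented)
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])

    def tp(a: str, b: str) -> float:
        """Transitional probability of b given a: count(a,b) / count(a)."""
        return pairs[(a, b)] / firsts[a]

    for a, b in [("A", "B"), ("C", "A"), ("C", "X")]:
        print(f"TP({a}->{b}) = {tp(a, b):.2f}")
    # within-unit transitions (A->B) come out high; unit-boundary ones lower
    ```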

  19. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    ERIC Educational Resources Information Center

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  20. Functionally Specific Oscillatory Activity Correlates between Visual and Auditory Cortex in the Blind

    ERIC Educational Resources Information Center

    Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.

    2012-01-01

    Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…

  1. Imaginary Companions and Young Children's Responses to Ambiguous Auditory Stimuli: Implications for Typical and Atypical Development

    ERIC Educational Resources Information Center

    Fernyhough, Charles; Bland, Kirsten; Meins, Elizabeth; Coltheart, Max

    2007-01-01

    Background: Previous research has reported a link between imaginary companions (ICs) in middle childhood and the perception of verbal material in ambiguous auditory stimuli. These findings have been interpreted in terms of commonalities in the cognitive processes underlying children's engagement with ICs and adults' reporting of imaginary verbal…

  2. Prevalence of auditory changes in newborns in a teaching hospital

    PubMed Central

    Guimarães, Valeriana de Castro; Barbosa, Maria Alves

    2012-01-01

    Summary Introduction: Early diagnosis of and intervention in deafness are of fundamental importance for child development. Hearing loss is more prevalent than other disorders found at birth. Objective: To estimate the prevalence of auditory alterations in newborns at a teaching hospital. Method: A prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 infants screened, 46 (20.4%) showed absent emissions and were referred for a second emission test. Of the 26 (56.5%) children who returned for the retest, 8 (30.8%) still showed absent emissions and were referred to the otolaryngologist. Five (55.5%) attended and were examined by the physician. Of these, 3 (75.0%) had normal otoscopy and were referred for brainstem auditory evoked potential (PEATE) assessment. Of all the children studied, 198 (87.6%) had emissions present in one of the tests, and 2 (0.9%) received a diagnosis of deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers relevant epidemiological data and presents the first report on the subject, supplying preliminary results for the future implementation and development of a neonatal hearing screening program. PMID:25991933

  3. Developmental and cross-modal plasticity in deafness: evidence from the P1 and N1 event related potentials in cochlear implanted children.

    PubMed

    Sharma, Anu; Campbell, Julia; Cardon, Garrett

    2015-02-01

    Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., auditory) in cochlear implanted children may influence their behavioral outcomes with the implant. Improvements in our understanding of developmental neuroplasticity in the auditory system should lead to harnessing central auditory plasticity for superior clinical technique. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  5. Acute effects of Delta9-tetrahydrocannabinol and standardized cannabis extract on the auditory evoked mismatch negativity.

    PubMed

    Juckel, Georg; Roser, Patrik; Nadulski, Thomas; Stadelmann, Andreas M; Gallinat, Jürgen

    2007-12-01

    Reduced amplitudes of auditory evoked mismatch negativity (MMN) have often been found in schizophrenic patients, indicating deficient auditory information processing and working memory. Cannabis-induced psychotic states may resemble schizophrenia. Currently, there are discussions focusing on the close relationship between cannabis, the endocannabinoid and dopaminergic system, and the onset of schizophrenic psychosis. This study investigated the effects of cannabis on MMN amplitude in 22 healthy volunteers (age 28+/-6 years, 11 male) by comparing Delta(9)-tetrahydrocannabinol (Delta(9)-THC) and standardized cannabis extract containing Delta(9)-THC and cannabidiol (CBD) in a prospective, double-blind, placebo-controlled cross-over design. The MMNs resulting from 1000 auditory stimuli were recorded by 32-channel EEG. The standard stimuli were 1000 Hz, 80 dB SPL, and 100 ms duration. The deviant stimuli differed in frequency (1500 Hz). Significantly greater MMN amplitude values at central electrodes were found under cannabis extract, but not under Delta(9)-THC. There were no significant differences between MMN amplitudes at frontal electrodes. MMN amplitudes at central electrodes were significantly correlated with 11-OH-THC concentration, the most important psychoactive metabolite of Delta(9)-THC. Since the main difference between Delta(9)-THC and standardized cannabis extract is CBD, which seems to have neuroprotective and anti-psychotic properties, it can be speculated that the greater MMN amplitude, which may imply higher cortical activation and cognitive performance, is related to the positive effects of CBD. This effect may be relevant for auditory cortex activity in particular because only MMN amplitudes at the central, but not at the frontal electrodes were enhanced under cannabis.

  6. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    PubMed

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings for music and reading supports the usefulness of music for promoting child literacy, with the potential to improve reading remediation.

  7. A Computational Model of Semantic Memory Impairment: Modality-Specificity and Emergent Category-Specificity

    DTIC Science & Technology

    1991-09-01

    just one modality (e.g. visual or auditory agnosia) or impaired manipulation of objects with specific uses, despite intact recognition of them (apraxia...Neurosurgery and Psychiatry, 51, 1201-1207. Farah, M. J. (1991) Patterns of co-occurrence among the associative agnosias: Implications for visual object

  8. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    PubMed

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training. Copyright © 2017 the authors 0270-6474/17/375948-12$15.00/0.

  9. Auditory-Motor Processing of Speech Sounds

    PubMed Central

    Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.

    2013-01-01

    The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846

  10. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).

    PubMed

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  11. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Relating the variability of tone-burst otoacoustic emission and auditory brainstem response latencies to the underlying cochlear mechanics

    NASA Astrophysics Data System (ADS)

    Verhulst, Sarah; Shera, Christopher A.

    2015-12-01

    Forward and reverse cochlear latency and its relation to the frequency tuning of the auditory filters can be assessed using tone bursts (TBs). Otoacoustic emissions (TBOAEs) estimate the cochlear roundtrip time, while auditory brainstem responses (ABRs) to the same stimuli aim at measuring the auditory filter buildup time. Latency ratios are generally close to two and controversy exists about the relationship of this ratio to cochlear mechanics. We explored why the two methods provide different estimates of filter buildup time, and ratios with large inter-subject variability, using a time-domain model for OAEs and ABRs. We compared latencies for twenty models, in which all parameters but the cochlear irregularities responsible for reflection-source OAEs were identical, and found that TBOAE latencies were much more variable than ABR latencies. Multiple reflection-sources generated within the evoking stimulus bandwidth were found to shape the TBOAE envelope and complicate the interpretation of TBOAE latency and TBOAE/ABR ratios in terms of auditory filter tuning.
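
    The latency ratio at issue is a simple quotient; the sketch below computes per-subject TBOAE/ABR ratios from hypothetical latency estimates, with the TBOAE values deliberately more variable than the ABR values to echo the modeling result described above.

    ```python
    tboae_latency_ms = [9.8, 11.4, 8.1, 13.0]   # reflection-source emission delays (hypothetical)
    abr_latency_ms = [5.1, 5.6, 4.7, 5.9]       # ABR-derived forward latencies (hypothetical)

    ratios = [t / a for t, a in zip(tboae_latency_ms, abr_latency_ms)]
    mean_ratio = sum(ratios) / len(ratios)
    print("per-subject ratios:", [f"{r:.2f}" for r in ratios])
    print(f"mean TBOAE/ABR ratio: {mean_ratio:.2f} (literature value close to 2)")
    ```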

  13. Cortical Development and Neuroplasticity in Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Sharma, Anu; Cardon, Garrett

    2015-01-01

    Cortical development is dependent to a large extent on stimulus-driven input. Auditory Neuropathy Spectrum Disorder (ANSD) is a recently described form of hearing impairment where neural dys-synchrony is the predominant characteristic. Children with ANSD provide a unique platform to examine the effects of asynchronous and degraded afferent stimulation on cortical auditory neuroplasticity and behavioral processing of sound. In this review, we describe patterns of auditory cortical maturation in children with ANSD. The disruption of cortical maturation that leads to these various patterns includes high levels of intra-individual cortical variability and deficits in cortical phase synchronization of oscillatory neural responses. These neurodevelopmental changes, which are constrained by sensitive periods for central auditory maturation, are correlated with behavioral outcomes for children with ANSD. Overall, we hypothesize that patterns of cortical development in children with ANSD appear to be markers of the severity of the underlying neural dys-synchrony, providing prognostic indicators of success of clinical intervention with amplification and/or electrical stimulation. PMID:26070426

  14. Activity in the left auditory cortex is associated with individual impulsivity in time discounting.

    PubMed

    Han, Ruokang; Takahashi, Taiki; Miyazaki, Akane; Kadoya, Tomoka; Kato, Shinya; Yokosawa, Koichi

    2015-01-01

    Impulsivity dictates individual decision-making behavior. Therefore, it can reflect consumption behavior and risk of addiction, and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. It is known that impulsivity is related to individual time perception, i.e., a person who perceives a certain physical time as being longer is impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Individual impulsivity was evaluated by a self-answered questionnaire in twelve healthy right-handed adults, and activities of the auditory cortices of bilateral hemispheres when listening to continuous tones were recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated with impulsivity, that is, larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it represents impulsivity indirectly.

  15. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  16. Matrix metalloproteinase-9 deletion rescues auditory evoked potential habituation deficit in a mouse model of Fragile X Syndrome.

    PubMed

    Lovelace, Jonathan W; Wen, Teresa H; Reinhard, Sarah; Hsu, Mike S; Sidhu, Harpreet; Ethell, Iryna M; Binder, Devin K; Razak, Khaleel A

    2016-05-01

    Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event related potentials (ERP) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to WT mice implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together, these data establish ERP habituation as a translation-relevant, physiological pre-clinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. Fragile X Syndrome (FXS) is the leading known genetic cause of autism spectrum disorders. Individuals with FXS show symptoms of auditory hypersensitivity. These symptoms may arise due to sustained neural responses to repeated sounds, but the underlying mechanisms remain unclear. For the first time, this study shows deficits in habituation of neural responses to repeated sounds in the Fmr1 KO mice as seen in humans with FXS. We also report an abnormally high level of matrix metalloprotease-9 (MMP-9) in the auditory cortex of Fmr1 KO mice and that deletion of Mmp-9 from Fmr1 KO mice reverses habituation deficits. These data provide a translation-relevant electrophysiological biomarker for sensory deficits in FXS and implicate MMP-9 as a target for drug discovery. Copyright © 2016 Elsevier Inc. All rights reserved.
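
    One common way to quantify the habituation deficit described above is a ratio of later responses to the first response in a sound train; the sketch below applies such an index to invented N1 amplitudes for the two genotypes. The index definition is a generic choice, not necessarily the paper's exact metric.

    ```python
    import numpy as np

    n1_wt = np.array([12.0, 9.1, 7.8, 7.2])     # µV, sounds 1..4, wild type (hypothetical)
    n1_ko = np.array([14.5, 13.9, 14.1, 13.6])  # µV, Fmr1 KO (hypothetical)

    def habituation_ratio(amps: np.ndarray) -> float:
        """Mean of responses 2..n divided by the response to the first sound."""
        return float(np.mean(amps[1:]) / amps[0])

    print(f"WT ratio: {habituation_ratio(n1_wt):.2f}")   # well below 1: habituation
    print(f"KO ratio: {habituation_ratio(n1_ko):.2f}")   # near 1: reduced habituation
    ```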

  17. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    PubMed

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.

  18. Brain-computer interfaces using capacitive measurement of visual or auditory steady-state responses

    NASA Astrophysics Data System (ADS)

    Baek, Hyun Jae; Kim, Hyun Seok; Heo, Jeong; Lim, Yong Gyu; Park, Kwang Suk

    2013-04-01

    Objective. Brain-computer interface (BCI) technologies have been intensely studied to provide alternative communication tools entirely independent of neuromuscular activities. Current BCI technologies use electroencephalogram (EEG) acquisition methods that require unpleasant gel injections, impractical preparations and clean-up procedures. The next generation of BCI technologies requires practical, user-friendly, nonintrusive EEG platforms in order to facilitate the application of laboratory work in real-world settings. Approach. A capacitive electrode that does not require an electrolytic gel or direct electrode-scalp contact is a potential alternative to the conventional wet electrode in future BCI systems. We have proposed a new capacitive EEG electrode that contains a conductive polymer-sensing surface, which enhances electrode performance. This paper presents results from five subjects who exhibited visual or auditory steady-state responses according to BCI using these new capacitive electrodes. The steady-state visual evoked potential (SSVEP) spelling system and the auditory steady-state response (ASSR) binary decision system were employed. Main results. Offline tests demonstrated performance high enough for use in a BCI system (accuracy: 95.2%, ITR: 19.91 bpm for SSVEP BCI (6 s), accuracy: 82.6%, ITR: 1.48 bpm for ASSR BCI (14 s)) with the analysis time being slightly longer than that when wet electrodes were employed with the same BCI system (accuracy: 91.2%, ITR: 25.79 bpm for SSVEP BCI (4 s), accuracy: 81.3%, ITR: 1.57 bpm for ASSR BCI (12 s)). Subjects performed online BCI under the SSVEP paradigm in copy spelling mode and under the ASSR paradigm in selective attention mode with a mean information transfer rate (ITR) of 17.78 ± 2.08 and 0.7 ± 0.24 bpm, respectively. Significance. The results of these experiments demonstrate the feasibility of using our capacitive EEG electrode in BCI systems. This capacitive electrode may become a flexible and non-intrusive tool fit for various applications in the next generation of BCI technologies.
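
    Information transfer rates like those quoted above are conventionally computed with the Wolpaw formula; the sketch below implements it. The 2-class case matches the ASSR binary decision system, but exact reproduction of the reported figures depends on timing assumptions the abstract does not state, so the output only approximates them.

    ```python
    from math import log2

    def itr_bits_per_min(n_targets: int, p: float, t_sec: float) -> float:
        """Wolpaw ITR: bits per selection scaled to selections per minute."""
        if p <= 0 or p >= 1:
            bits = log2(n_targets) if p == 1 else 0.0
        else:
            bits = (log2(n_targets) + p * log2(p)
                    + (1 - p) * log2((1 - p) / (n_targets - 1)))
        return bits * 60.0 / t_sec

    # e.g. a 2-class ASSR decision at 82.6% accuracy, one selection per 14 s
    print(f"{itr_bits_per_min(2, 0.826, 14.0):.2f} bpm")
    ```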

  19. Identification of a pathway for intelligible speech in the left temporal lobe

    PubMed Central

    Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.

    2017-01-01

    Summary It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443

  1. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    PubMed Central

    Hassan, Shirin E.

    2012-01-01

    Purpose. The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Methods. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. Results. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Conclusions. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times. PMID:22427593

  2. Effects of Long-Term Musical Training on Cortical Evoked Auditory Potentials

    PubMed Central

    Brown, Carolyn J.; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul

    2016-01-01

    Objective. Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared to non-musicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the auditory change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and non-musicians. Design. Twenty individuals (10 musicians and 10 non-musicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three-interval, forced-choice procedure, and the ACC was recorded and used as an objective (i.e., non-behavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. Results. As a group, musicians were able to detect smaller changes in pitch than non-musicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than non-musicians. ACC responses recorded from musicians were larger than those recorded from non-musicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than non-musicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the rippled noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Conclusions. Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than non-musicians. Those differences are evident not only in perceptual/behavioral tests, but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based auditory training program, perhaps geared toward pediatric and/or hearing-impaired listeners. PMID:28225736
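
    As an aside, a spectral-ripple stimulus of the kind used for this discrimination testing can be synthesized by summing random-phase tones whose levels follow a sinusoid across log-frequency. The sketch below is a generic recipe; the tone count, ripple density, and frequency range are illustrative assumptions rather than the study's parameters.

```python
import numpy as np

fs, dur = 44100, 0.5
t = np.arange(int(fs * dur)) / fs
density, phase = 2.0, 0.0            # ripples/octave and ripple phase
f_lo, f_hi = 100.0, 8000.0

# Sum many random-phase tones whose levels follow a sinusoid across
# log2(frequency); shifting `phase` moves the spectral peaks and
# valleys, which is what a ripple discrimination trial manipulates.
n_tones = 400
freqs = f_lo * 2 ** np.linspace(0, np.log2(f_hi / f_lo), n_tones)
amps = 10 ** (0.5 * np.sin(2 * np.pi * density * np.log2(freqs / f_lo) + phase))
rng = np.random.default_rng(0)
phis = rng.uniform(0, 2 * np.pi, n_tones)
ripple = sum(a * np.sin(2 * np.pi * f * t + p)
             for f, a, p in zip(freqs, amps, phis))
ripple /= np.abs(ripple).max()
```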

  3. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    PubMed

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
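
    A minimal sketch of one amplitude-based metric of the kind compared here is a beta-band amplitude-envelope correlation: band-pass filter two signals, take their Hilbert envelopes, and correlate. This is a generic implementation under simplifying assumptions, not the authors' exact pipeline (which operated on beamformed MEG source time series).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_envelope_correlation(x, y, fs):
    """Correlate beta-band (13-30 Hz) amplitude envelopes of two signals."""
    b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="bandpass")
    env_x = np.abs(hilbert(filtfilt(b, a, x)))
    env_y = np.abs(hilbert(filtfilt(b, a, y)))
    return np.corrcoef(env_x, env_y)[0, 1]

# Demo on simulated data: two noisy copies of a common 20-Hz rhythm
# with a slowly varying amplitude envelope.
fs = 600.0
t = np.arange(int(fs * 10)) / fs
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 20 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)
print(f"beta envelope correlation: {beta_envelope_correlation(x, y, fs):.2f}")
```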

  4. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach

    PubMed Central

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS changes dynamically with experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons also responded to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, as after tool use. This prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool use, which is supported by computational and behavioral data. PMID:25698947
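
    The proposed mechanism can be caricatured in a few lines of code. The toy model below is not the published network; it only illustrates the core idea that Hebbian co-activation of hand touch with far auditory input can extend an initially near-space weight profile. All parameters are invented.

```python
import numpy as np

# Toy sketch (not the published model): a multisensory weight profile
# over distance from the hand plays the role of the PPS boundary.
# Pairing touch on the hand with a far sound strengthens far-space
# weights via a Hebbian update, extending the responsive region.
distances = np.arange(101.0)                 # cm from the hand
w = np.exp(-distances / 20.0)                # initial near-space profile
w0 = w.copy()

tactile = 1.0                                # touch on the hand (tool grip)
far_sound = np.exp(-(distances - 80.0) ** 2 / (2 * 10.0 ** 2))
lr = 0.05                                    # learning rate (invented)
for _ in range(50):                          # synchronous pairings
    w = np.clip(w + lr * tactile * far_sound, 0.0, 1.0)

print(f"response weight at 80 cm: before {w0[80]:.2f}, after {w[80]:.2f}")
```

    Presenting the same inputs out of synchrony (tactile set to zero during the sound) leaves the profile unchanged, mirroring the control conditions in both the simulations and the behavioral experiment.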

  5. Frequency-specific adaptation and its underlying circuit model in the auditory midbrain

    PubMed Central

    Shen, Li; Zhao, Lingyun; Hong, Bo

    2015-01-01

    Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. A pure tone of one specific frequency (the adaptor) was presented markedly more often than the others. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes, while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in the auditory midbrain can be accounted for by an adapted frequency channel and its lateral spreading of adaptation, shedding light on the organization of the underlying circuitry. PMID:26483641
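
    The flavor of the two-layer account can be sketched as follows. This is a deliberately simplified stand-in for the paper's model: a bank of frequency channels whose gain adapts with use, with adaptation spreading laterally to neighbouring channels. All parameters are illustrative.

```python
import numpy as np

# Simplified sketch in the spirit of a two-layer adaptation model
# (the actual model and parameters differ): channel gain drops with
# use, and adaptation spreads to neighbouring frequency channels, so
# a frequent adaptor suppresses responses at and around its frequency.
n_chan = 40
sigma_spread = 2.0                        # lateral spread width (channels)
idx = np.arange(n_chan)
spread = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma_spread) ** 2)

adapt = np.zeros(n_chan)
rate, decay = 0.02, 0.98                  # adaptation increment and recovery
rng = np.random.default_rng(0)
tones = np.where(rng.random(1000) < 0.8,  # adaptor on 80% of trials
                 20, rng.integers(0, n_chan, 1000))
for chan in tones:
    adapt = decay * adapt + rate * spread[chan]

gain = np.clip(1.0 - adapt, 0.0, None)
print(f"gain at adaptor (ch 20): {gain[20]:.2f}, far channel (ch 5): {gain[5]:.2f}")
```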

  6. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    PubMed

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

    Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  7. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
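
    The decoding step described here follows a standard pattern: train a linear classifier on single-trial response features and estimate accuracy by cross-validation. The sketch below uses simulated data and scikit-learn defaults as a stand-in; it is not the authors' specific decoder.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Simulated stand-in for single-trial EEG features:
# trials x (channels * time points), with weak class information.
rng = np.random.default_rng(0)
n_trials, n_feat = 200, 64 * 50
y = rng.integers(0, 4, n_trials)              # four speech categories
X = rng.standard_normal((n_trials, n_feat))
X[np.arange(n_trials), y] += 1.5              # inject class-dependent signal

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```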

  8. Neural basis of processing threatening voices in a crowded auditory world

    PubMed Central

    Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.

    2016-01-01

    In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543

  9. Genetic Pleiotropy Explains Associations between Musical Auditory Discrimination and Intelligence

    PubMed Central

    Mosing, Miriam A.; Pedersen, Nancy L.; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on these different discrimination tasks correlates positively across tasks and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate the underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model in which one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions. PMID:25419664
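
    The study's analysis used structural-equation twin modelling, but the intuition behind separating genetic and environmental components from twin data can be shown with Falconer's formulas, a much cruder technique than the paper's. The correlations below are invented.

```python
# Back-of-envelope illustration, not the paper's structural-equation
# model: Falconer's formulas recover variance components for a single
# trait from monozygotic (MZ) and dizygotic (DZ) twin correlations.
r_mz, r_dz = 0.60, 0.35      # invented twin correlations

h2 = 2 * (r_mz - r_dz)       # A: additive genetic variance
c2 = 2 * r_dz - r_mz         # C: shared environment
e2 = 1 - r_mz                # E: non-shared environment
print(f"A = {h2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```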

  10. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
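
    SSEP amplitude is conventionally quantified as the spectral amplitude at the frequency tagging the ignored stimulus. A minimal sketch, with an invented flicker frequency and simulated data:

```python
import numpy as np

def ssep_amplitude(eeg, fs, tag_freq):
    """FFT amplitude at the stimulus tagging frequency (relative units)."""
    spectrum = np.abs(np.fft.rfft(eeg)) / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

fs, flicker = 500.0, 7.5                      # Hz (illustrative values)
t = np.arange(int(fs * 20)) / fs
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * flicker * t) + rng.standard_normal(t.size)
print(f"SSEP amplitude at {flicker} Hz: {ssep_amplitude(eeg, fs, flicker):.3f}")
```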

  11. Cortical processes of speech illusions in the general population.

    PubMed

    Schepers, E; Bodar, L; van Os, J; Lousberg, R

    2016-10-18

    There is evidence that experimentally elicited auditory illusions in the general population index risk for psychotic symptoms. As little is known about the underlying cortical mechanisms of auditory illusions, an experiment was conducted to analyze the processing of auditory illusions in a general population sample. In a follow-up design with two measurement moments (baseline and 6 months), participants (n = 83) underwent the White Noise task with simultaneous recording from a 14-lead EEG. An auditory illusion was defined as hearing any speech in a sound fragment containing white noise. A total of 256 speech illusions (SI) were observed over the two measurements, with a high degree of stability of SI over time. There were 7 main effects of speech illusion on the EEG alpha band, the most significant indicating a decrease in activity at T3 (t = -4.05). Other EEG frequency bands (slow beta, fast beta, gamma, delta, theta) showed no significant associations with SI. SIs are characterized by reduced alpha activity in non-clinical populations. Given the association of SIs with psychosis, follow-up research is required to examine the possibility of reduced alpha activity mediating SIs in high-risk and symptomatic populations.

  12. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520
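
    For readers unfamiliar with vocoded speech, the degradation can be sketched as a noise vocoder: split the signal into a few frequency bands, extract each band's envelope, and re-impose it on band-limited noise. The band count and edges below are illustrative, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=6000.0):
    """Crude noise vocoder: band envelopes re-imposed on filtered noise."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        envelope = np.abs(hilbert(filtfilt(b, a, speech)))   # band envelope
        carrier = filtfilt(b, a, rng.standard_normal(speech.size))
        out += envelope * carrier
    return out / np.abs(out).max()

# Demo on a synthetic amplitude-modulated tone standing in for speech.
fs = 16000
t = np.arange(fs) / fs
demo = 0.5 * np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(demo, fs)
```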

  13. Mechanisms of spectral and temporal integration in the mustached bat inferior colliculus

    PubMed Central

    Wenstrup, Jeffrey James; Nataraj, Kiran; Sanchez, Jason Tait

    2012-01-01

    This review describes mechanisms and circuitry underlying combination-sensitive response properties in the auditory brainstem and midbrain. Combination-sensitive neurons, performing a type of auditory spectro-temporal integration, respond to specific, properly timed combinations of spectral elements in vocal signals and other acoustic stimuli. While these neurons are known to occur in the auditory forebrain of many vertebrate species, the work described here establishes their origin in the auditory brainstem and midbrain. Focusing on the mustached bat, we review several major findings: (1) Combination-sensitive responses involve facilitatory interactions, inhibitory interactions, or both when activated by distinct spectral elements in complex sounds. (2) Combination-sensitive responses are created in distinct stages: inhibition arises mainly in lateral lemniscal nuclei of the auditory brainstem, while facilitation arises in the inferior colliculus (IC) of the midbrain. (3) Spectral integration underlying combination-sensitive responses requires a low-frequency input tuned well below a neuron's characteristic frequency (ChF). Low-ChF neurons in the auditory brainstem project to high-ChF regions in brainstem or IC to create combination sensitivity. (4) At their sites of origin, both facilitatory and inhibitory combination-sensitive interactions depend on glycinergic inputs and are eliminated by glycine receptor blockade. Surprisingly, facilitatory interactions in IC depend almost exclusively on glycinergic inputs and are largely independent of glutamatergic and GABAergic inputs. (5) The medial nucleus of the trapezoid body (MNTB), the lateral lemniscal nuclei, and the IC play critical roles in creating combination-sensitive responses. We propose that these mechanisms, based on work in the mustached bat, apply to a broad range of mammals and other vertebrates that depend on temporally sensitive integration of information across the audible spectrum. PMID:23109917

  14. Matrix metalloproteinase-9 deletion rescues auditory evoked potential habituation deficit in a mouse model of Fragile X Syndrome

    PubMed Central

    Lovelace, Jonathan W.; Wen, Teresa H.; Reinhard, Sarah; Hsu, Mike S.; Sidhu, Harpreet; Ethell, Iryna M.; Binder, Devin K.; Razak, Khaleel A.

    2016-01-01

    Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition, and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons, suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event-related potentials (ERP) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate-dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses the ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to WT mice, implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological preclinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. PMID:26850918
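
    One simple way to quantify ERP habituation of the kind reported here is the ratio of late to early N1 amplitude across a train of repeated sounds. The metric and the numbers below are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def habituation_index(n1_amplitudes):
    """Ratio of late to early N1 amplitude over repeated sounds.

    Values near 1 indicate little habituation; values well below 1
    indicate the normal amplitude decrement. This particular metric
    is an assumption for illustration.
    """
    n1 = np.asarray(n1_amplitudes, dtype=float)
    return n1[-3:].mean() / n1[:3].mean()

wt = [8.0, 6.5, 5.2, 4.8, 4.5, 4.3]   # invented N1 series (uV), habituating
ko = [8.0, 7.8, 7.6, 7.5, 7.4, 7.3]   # invented N1 series, little habituation
print(f"WT index: {habituation_index(wt):.2f}, KO index: {habituation_index(ko):.2f}")
```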

  15. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    PubMed

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields, in combination with physiological forward suppression, is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS, by modulating cortical excitability, causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
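
    The stimulus construction is straightforward to sketch: an ABAB tone sequence whose A-B separation is specified in semitones, with 6 semitones being the most ambiguous condition in this study. The tone duration, base frequency, and ramps below are illustrative choices, not the study's exact values.

```python
import numpy as np

def abab_sequence(f_b=400.0, df_semitones=6, tone_dur=0.1, n_rep=20, fs=44100):
    """ABAB tone sequence with a frequency separation given in semitones."""
    f_a = f_b * 2 ** (df_semitones / 12)          # A tone above the B tone
    t = np.arange(int(fs * tone_dur)) / fs
    ramp = np.minimum(1, np.minimum(t, t[::-1]) / 0.01)   # 10-ms on/off ramps
    tone = lambda f: np.sin(2 * np.pi * f * t) * ramp
    return np.concatenate([np.concatenate([tone(f_a), tone(f_b)])
                           for _ in range(n_rep)])

seq = abab_sequence(df_semitones=6)   # the most ambiguous ∆F condition
```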

  16. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms

    PubMed Central

    Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe

    2017-01-01

    Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as typical visual targets do. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233

  2. An investigation of the relation between sibilant production and somatosensory and auditory acuity

    PubMed Central

    Ghosh, Satrajit S.; Matthies, Melanie L.; Maas, Edwin; Hanson, Alexandra; Tiede, Mark; Ménard, Lucie; Guenther, Frank H.; Lane, Harlan; Perkell, Joseph S.

    2010-01-01

    The relation between auditory acuity, somatosensory acuity and the magnitude of produced sibilant contrast was investigated with data from 18 participants. To measure auditory acuity, stimuli from a synthetic sibilant continuum ([s]-[ʃ]) were used in a four-interval, two-alternative forced choice adaptive-staircase discrimination task. To measure somatosensory acuity, small plastic domes with grooves of different spacing were pressed against each participant’s tongue tip and the participant was asked to identify one of four possible orientations of the grooves. Sibilant contrast magnitudes were estimated from productions of the words ‘said,’ ‘shed,’ ‘sid,’ and ‘shid’. Multiple linear regression revealed a significant relation indicating that a combination of somatosensory and auditory acuity measures predicts produced acoustic contrast. When the participants were divided into high- and low-acuity groups based on their median somatosensory and auditory acuity measures, separate ANOVA analyses with sibilant contrast as the dependent variable yielded a significant main effect for each acuity group. These results provide evidence that sibilant productions have auditory as well as somatosensory goals and are consistent with prior results and the theoretical framework underlying the DIVA model of speech production. PMID:21110603
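
    The regression at the core of this analysis can be sketched with ordinary least squares: predict each speaker's produced contrast from the two acuity measures and report R². The data below are simulated, not the study's.

```python
import numpy as np

# Simulated stand-in for the study's data (n = 18 speakers).
rng = np.random.default_rng(3)
n = 18
auditory = rng.standard_normal(n)       # auditory acuity measure
somato = rng.standard_normal(n)         # somatosensory acuity measure
contrast = 0.5 * auditory + 0.4 * somato + 0.3 * rng.standard_normal(n)

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n), auditory, somato])
beta, *_ = np.linalg.lstsq(X, contrast, rcond=None)
pred = X @ beta
r2 = 1 - ((contrast - pred) ** 2).sum() / ((contrast - contrast.mean()) ** 2).sum()
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.2f}")
```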

  3. Music training and working memory: an ERP study.

    PubMed

    George, Elyse M; Coch, Donna

    2011-04-01

    While previous research has suggested that music training is associated with improvements in various cognitive and linguistic skills, the mechanisms mediating or underlying these associations are mostly unknown. Here, we addressed the hypothesis that previous music training is related to improved working memory. Using event-related potentials (ERPs) and a standardized test of working memory, we investigated both neural and behavioral aspects of working memory in college-aged, non-professional musicians and non-musicians. Behaviorally, musicians outperformed non-musicians on standardized subtests of visual, phonological, and executive memory. ERPs were recorded in standard auditory and visual oddball paradigms (participants responded to infrequent deviant stimuli embedded in lists of standard stimuli). Electrophysiologically, musicians demonstrated faster updating of working memory (shorter latency P300s) in both the auditory and visual domains and musicians allocated more neural resources to auditory stimuli (larger amplitude P300), showing increased sensitivity to the auditory standard/deviant difference and less effortful updating of auditory working memory. These findings demonstrate that long-term music training is related to improvements in working memory, in both the auditory and visual domains and in terms of both behavioral and ERP measures. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part.

    PubMed

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-09-13

    The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.
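
    Temporal order judgement data of this kind are typically summarized by fitting a cumulative Gaussian to the proportion of one response as a function of stimulus onset asynchrony (SOA); the point of subjective simultaneity (PSS) is the fitted 50% point. A sketch with simulated data (the analysis details are an assumption, not necessarily the authors' procedure):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)   # ms
p_first = np.array([0.05, 0.20, 0.35, 0.55, 0.75, 0.90, 0.98])    # simulated

def cum_gauss(x, pss, sigma):
    # Probability of reporting one order as a function of SOA.
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_first, p0=(0.0, 50.0))
print(f"PSS = {pss:.1f} ms, JND ~ {0.675 * sigma:.1f} ms")
```

    A shift of the fitted PSS away from zero under voluntary movement, relative to the passive and no-movement conditions, would capture the effect reported above.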

  5. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.

  6. 47 CFR 14.21 - Performance Objectives.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...

  7. 47 CFR 14.21 - Performance Objectives.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...

  8. Echolocation system of the bottlenose dolphin

    NASA Astrophysics Data System (ADS)

    Dubrovsky, N. A.

    2004-05-01

    The hypothesis put forward by Vel’min and Dubrovsky [1] is discussed. The hypothesis suggests that bottlenose dolphins possess two functionally separate auditory subsystems: one of them serves for analyzing extraneous sounds, as in nonecholocating terrestrial animals, and the other performs the analysis of echoes caused by the echolocation clicks of the animal itself. The first subsystem is called passive hearing, and the second, active hearing. The results of experimental studies of the dolphin’s echolocation system are discussed to confirm the proposed hypothesis. For the active hearing of dolphins, the notion of a critical interval is considered: the interval of time within which a merged auditory image of the echolocation object is formed, provided that all highlights of the echo from this object fall within that interval.

  9. An Investigation of Leg and Trunk Strength and Reaction Times of Hard-Style Martial Arts Practitioners

    PubMed Central

    Donovan, Oliver O; Cheung, Jeanette; Catley, Maria; McGregor, Alison H.; Strutton, Paul H.

    2006-01-01

    The purpose of this study was to investigate trunk and knee strength in practitioners of hard-style martial arts. An additional objective was to examine reaction times in these participants by measuring simple reaction times (SRT), choice reaction times (CRT) and movement times (MT). Thirteen high-level martial artists and twelve sedentary participants were tested under isokinetic and isometric conditions on an isokinetic dynamometer. Response and movement times were also measured in response to simple and choice auditory cues. Results indicated that the martial arts group generated a greater body-weight adjusted peak torque with both legs at all speeds during isokinetic extension and flexion, and in isometric extension but not flexion. In isokinetic and isometric trunk flexion and extension, martial artists tended to have higher peak torques than controls, but they were not significantly different (p > 0.05). During the SRT and CRT tasks the martial artists were no quicker in lifting their hand off a button in response to the stimulus [reaction time (RT)] but were significantly faster in moving to press another button [movement time (MT)]. In conclusion, the results reveal that training in a martial art increases the strength of both the flexors and extensors of the leg. Furthermore, they have faster movement times to auditory stimuli. These results are consistent with the physical aspects of the martial arts. Key Points: Martial artists undertaking hard-style martial arts have greater strength in their knee flexor and extensor muscles as tested under isokinetic testing. Under isometric testing conditions they have stronger knee extensors only. The trunk musculature is generally higher under both conditions of testing in the martial artists, although not significantly. The total reaction times of the martial artists to an auditory stimulus were significantly faster than the control participants. When analysed further it was revealed that the decrease in reaction time was due to the movement time component of the total reaction time. The training involved for the practice of the hard-style martial arts increases the strength of muscles involved in kicking. This increased strength is not seen in the trunk muscles. Furthermore, martial artists have a faster response time; the cause of which appears to be only the faster movement time. PMID:24376366

  10. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    PubMed Central

    Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii

    2015-01-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging, particularly for spikes from mixed-signal (analog/digital) integrated circuit (IC) cochleas, because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility, and further tested in a word recognition task. Under low signal-to-noise ratio (SNR) conditions (SNR < –5 dB), the reconstructed audio gives better classification performance in this word recognition task than the original input at the same SNR. PMID:26528113
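
    The general idea of reconstructing a waveform from spikes can be sketched as placing spikes on a time grid and convolving with a smoothing kernel. The published method fit its reconstruction filters to data; the fixed exponential kernel and spike times below are simplifying assumptions for illustration.

```python
import numpy as np

# Place spikes from one channel on a sample grid, then smooth.
fs = 16000
dur = 0.2
grid = np.zeros(int(fs * dur))
spike_times = [0.010, 0.0125, 0.015, 0.050, 0.0525, 0.055]   # s, invented
for ts in spike_times:
    grid[int(ts * fs)] += 1.0

tau = 0.005                                           # 5-ms smoothing constant
k = np.exp(-np.arange(int(fs * 0.03)) / (fs * tau))   # 30-ms exponential kernel
recon = np.convolve(grid, k)[:grid.size]              # crude waveform estimate
```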

  11. Allocation of Attentional Resources toward a Secondary Cognitive Task Leads to Compromised Ankle Proprioceptive Performance in Healthy Young Adults

    PubMed Central

    Yasuda, Kazuhiro; Iimura, Naoyuki; Iwata, Hiroyasu

    2014-01-01

    The objective of the present study was to determine whether increased attentional demands influence the assessment of ankle joint proprioceptive ability in young adults. We used a dual-task condition, in which participants performed an ankle ipsilateral position-matching task with and without a secondary serial auditory subtraction task during target angle encoding. Two experiments were performed with two different cohorts: one in which the auditory subtraction task was easy (experiment 1a) and one in which it was difficult (experiment 1b). The results showed that, compared with the single-task condition, participants had higher absolute error under dual-task conditions in experiment 1b. The reduction in position-matching accuracy with an attentionally demanding cognitive task suggests that allocation of attentional resources toward a difficult second task can lead to compromised ankle proprioceptive performance. These findings therefore indicate that the difficulty level of the cognitive task may be the critical factor that decreases the accuracy of the position-matching task. We conclude that increased attentional demand with a difficult cognitive task does influence the assessment of ankle joint proprioceptive ability in young adults when measured using an ankle ipsilateral position-matching task. PMID:24523966

  12. Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task.

    PubMed

    Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John

    2002-01-01

    The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.

  13. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study.

    PubMed

    Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei

    2012-06-09

    Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. 10 male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day, the experiment objective, tasks, and experimental setting were explained. Then, they completed 30 minutes of guided training. On the second day, each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test, the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues, along with vision (multimodal feedback), can be attributed to a reduced attentional demand during the task, which may reflect a visual "pop-out" or enhancement effect. Also, the NASA TLX, the EEG alpha and beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications.

  15. Sex differences present in auditory looming perception, absent in auditory recession

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.; Seifritz, Erich

    2005-04-01

    When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.

  16. Cue-recruitment for extrinsic signals after training with low information stimuli.

    PubMed

    Jain, Anshul; Fuller, Stuart; Backus, Benjamin T

    2014-01-01

    Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.

  17. Development of a method for estimating oesophageal temperature by multi-locational temperature measurement inside the external auditory canal

    NASA Astrophysics Data System (ADS)

    Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi

    2017-09-01

    We aimed to develop a practical method to estimate oesophageal temperature by measuring multi-locational auditory canal temperatures. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working in hot environments. We asked 11 healthy male volunteers to exercise at 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C, at 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperatures from auditory canal temperatures and compared their fit and errors. The rectal temperature increased or decreased faster than oesophageal temperature at the start or end of exercise in all conditions. Estimated temperature agreed well with oesophageal temperature, and the square of the correlation coefficient of the best fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, estimated oesophageal temperature moved concordantly with oesophageal temperature, with an overestimation of around 0.1 °C. We confirmed that the approach, based on temperatures at three locations along the external auditory canal, can credibly estimate oesophageal temperature between 24 and 40 °C for people exercising in one place in a windless environment.
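
    The estimation equations themselves are not reproduced in the record, but the underlying idea, predicting oesophageal temperature as a weighted combination of the canal readings, can be sketched as an ordinary least-squares fit. The linear form, variable names and added intercept below are illustrative assumptions, not the authors' fitted model.

        import numpy as np

        def fit_core_model(canal_temps, oesoph_temps):
            """canal_temps: (n, 3) deg C at three canal locations; oesoph_temps: (n,) deg C."""
            X = np.column_stack([canal_temps, np.ones(len(canal_temps))])  # add intercept
            coef, *_ = np.linalg.lstsq(X, oesoph_temps, rcond=None)
            return coef  # three weights plus an intercept

        def estimate_core(coef, canal_temps):
            X = np.column_stack([canal_temps, np.ones(len(canal_temps))])
            return X @ coef

        def r_squared(y, y_hat):  # the record reports 0.904 for the best model
            return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)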

  18. Tone Series and the Nature of Working Memory Capacity Development

    ERIC Educational Resources Information Center

    Clark, Katherine M.; Hardman, Kyle O.; Schachtman, Todd R.; Saults, J. Scott; Glass, Bret A.; Cowan, Nelson

    2018-01-01

    Recent advances in understanding visual working memory, the limited information held in mind for use in ongoing processing, are extended here to examine auditory working memory development. Research with arrays of visual objects has shown how to distinguish the capacity, in terms of the "number" of objects retained, from the…

  19. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    PubMed Central

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360

  20. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders.

    PubMed

    Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K

    2013-11-01

    Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.

  1. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
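
    Transfer entropy, the directed-connectivity measure used in this study, quantifies how much the past of one signal reduces uncertainty about the next sample of another signal beyond that signal's own past. A minimal histogram-based sketch for two discretized time series follows; the bin count and single-sample lags are simplifying assumptions, and published MEG analyses use more careful estimators.

        import numpy as np

        def transfer_entropy(x, y, bins=4):
            """Histogram-based TE from x to y (in bits), with one-sample lags."""
            edges = lambda s: np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
            xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
            y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

            def H(*arrs):  # joint Shannon entropy of discrete variables
                _, counts = np.unique(np.stack(arrs, axis=1), axis=0,
                                      return_counts=True)
                p = counts / counts.sum()
                return -(p * np.log2(p)).sum()

            # TE(X->Y) = H(Y_next | Y_past) - H(Y_next | Y_past, X_past)
            return (H(y_next, y_past) - H(y_past)
                    - H(y_next, y_past, x_past) + H(y_past, x_past))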

  2. Perceptual consequences of disrupted auditory nerve activity.

    PubMed

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that the disrupted auditory nerve activity is due to desynchronized or reduced neural activity, or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing once their impaired intensity perception is taken into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models, based on desynchronized and on reduced discharge in the auditory nerve, that successfully account for the observed neurological and behavioral data. The present methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.

  3. Frontal Top-Down Signals Increase Coupling of Auditory Low-Frequency Oscillations to Continuous Speech in Human Listeners

    PubMed Central

    Park, Hyojin; Ince, Robin A.A.; Schyns, Philippe G.; Thut, Gregor; Gross, Joachim

    2015-01-01

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433

  4. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    PubMed

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. The role of the speech-language pathologist in identifying and treating children with auditory processing disorder.

    PubMed

    Richard, Gail J

    2011-07-01

    A summary of issues regarding auditory processing disorder (APD) is presented, including some of the remaining questions and challenges raised by the articles included in the clinical forum. Evolution of APD as a diagnostic entity within audiology and speech-language pathology is reviewed. A summary of treatment efficacy results and issues is provided, as well as the continuing dilemma for speech-language pathologists (SLPs) charged with providing treatment for referred APD clients. The role of the SLP in diagnosing and treating APD remains under discussion, despite lack of efficacy data supporting auditory intervention and questions regarding the clinical relevance and validity of APD.

  6. The mismatch negativity as a measure of auditory stream segregation in a simulated "cocktail-party" scenario: effect of age.

    PubMed

    Getzmann, Stephan; Näätänen, Risto

    2015-11-01

    With age, the ability to understand speech in multitalker environments usually deteriorates. The central auditory system has to perceptually segregate and group the acoustic input into sequences of distinct auditory objects. The present study used electrophysiological measures to study effects of age on auditory stream segregation in a multitalker scenario. Younger and older adults were presented with streams of short speech stimuli. When a single target stream was presented, the occurrence of a rare (deviant) syllable among frequent (standard) syllables elicited the mismatch negativity (MMN), an electrophysiological correlate of automatic deviance detection. The presence of a second, concurrent stream consisting of the deviant syllable of the target stream reduced the MMN amplitude, especially when it was located near the target stream. The decrease in MMN amplitude indicates that the rare syllable of the target stream was perceived as deviant less often, suggesting reduced stream segregation with decreasing stream distance. Moreover, the presence of a concurrent stream increased the MMN peak latency of the older group but not that of the younger group. The results provide neurophysiological evidence for the effects of concurrent speech on auditory processing in older adults, suggesting that older adults need more time for stream segregation in the presence of concurrent speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Neurons and Objects: The Case of Auditory Cortex

    PubMed Central

    Nelken, Israel; Bar-Yosef, Omer

    2008-01-01

    Sounds are encoded into electrical activity in the inner ear, where they are represented (roughly) as patterns of energy in narrow frequency bands. However, sounds are perceived in terms of their high-order properties. It is generally believed that this transformation is performed along the auditory hierarchy, with low-level physical cues computed at early stages of the auditory system and high-level abstract qualities at high-order cortical areas. The functional position of primary auditory cortex (A1) in this scheme is unclear – is it ‘early’, encoding physical cues, or is it ‘late’, already encoding abstract qualities? Here we argue that neurons in cat A1 show sensitivity to high-level features of sounds. In particular, these neurons may already show sensitivity to ‘auditory objects’. The evidence for this claim comes from studies in which individual sounds are presented singly and in mixtures. Many neurons in cat A1 respond to mixtures in the same way they respond to one of the individual components of the mixture, and in many cases neurons may respond to a low-level component of the mixture rather than to the acoustically dominant one, even though the same neurons respond to the acoustically-dominant component when presented alone. PMID:18982113

  8. P50 suppression in children with selective mutism: a preliminary report.

    PubMed

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in children with SM. The classic paired-click paradigm was used to assess suppression of the P50 event-related potential to the second of two sequentially presented clicks in 10 children with SM and 10 control children. A significant suppression of P50 to the second click was evident in the SM group, whereas no suppression effect was observed in controls. Suppression was evident in 90% of the SM group and in 40% of controls, whereas augmentation was found in 10% and 60%, respectively, yielding a significant association between group and suppression of P50. P50 to the first click was comparable in children with SM and controls. The adult-like, mature P50 suppression effect exhibited by children with SM may reflect a cortical mechanism of compensatory inhibition of irrelevant repetitive information that was not properly suppressed at lower levels of their auditory system. The current data extend our previous findings suggesting that differential auditory processing may be involved in speech selectivity in SM.
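
    In the paired-click paradigm, suppression is conventionally quantified by comparing the P50 amplitude evoked by the second click (S2) with that evoked by the first (S1), with ratios below 1 indicating suppression and ratios above 1 indicating augmentation. The toy computation below illustrates that convention; the values, and the assumption that the study used this exact index, are illustrative.

        def p50_suppression_ratio(s1_amplitude_uv, s2_amplitude_uv):
            """S2/S1 amplitude ratio: < 1 suppression, > 1 augmentation."""
            return s2_amplitude_uv / s1_amplitude_uv

        print(p50_suppression_ratio(4.0, 1.5))  # 0.375 -> suppression
        print(p50_suppression_ratio(2.0, 3.0))  # 1.5   -> augmentation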

  9. Auditory training of speech recognition with interrupted and continuous noise maskers by children with hearing impairment

    PubMed Central

    Sullivan, Jessica R.; Thibodeau, Linda M.; Assmann, Peter F.

    2013-01-01

    Previous studies have indicated that individuals with normal hearing (NH) experience a perceptual advantage for speech recognition in interrupted noise compared to continuous noise. In contrast, adults with hearing impairment (HI) and younger children with NH receive a minimal benefit. The objective of this investigation was to assess whether auditory training in interrupted noise would improve speech recognition in noise for children with HI and perhaps enhance their utilization of glimpsing skills. A partially-repeated measures design was used to evaluate the effectiveness of seven 1-h sessions of auditory training in interrupted and continuous noise. Speech recognition scores in interrupted and continuous noise were obtained pre-training, post-training, and 3 months post-training from 24 children with moderate-to-severe hearing loss. Children who participated in auditory training in interrupted noise demonstrated a significantly greater improvement in speech recognition compared to those who trained in continuous noise. Those who trained in interrupted noise demonstrated similar improvements in both noise conditions, while those who trained in continuous noise showed only modest improvements in the interrupted noise condition. This study presents direct evidence that auditory training in interrupted noise can be beneficial in improving speech recognition in noise for children with HI. PMID:23297921

  10. Relation between measures of speech-in-noise performance and measures of efferent activity

    NASA Astrophysics Data System (ADS)

    Smith, Brad; Harkrider, Ashley; Burchfield, Samuel; Nabelek, Anna

    2003-04-01

    Individual differences in auditory perceptual abilities in noise are well documented but the factors causing such variability are unclear. The purpose of this study was to determine if individual differences in responses measured from the auditory efferent system were correlated to individual variations in speech-in-noise performance. The relation between behavioral performance on three speech-in-noise tasks and two objective measures of the efferent auditory system were examined in thirty normal-hearing, young adults. Two of the speech-in-noise tasks measured an acceptable noise level, the maximum level of speech-babble noise that a subject is willing to accept while listening to a story. For these, the acceptable noise level was evaluated using both an ipsilateral (story and noise in same ear) and a contralateral (story and noise in opposite ears) paradigm. The third speech-in-noise task evaluated speech recognition using monosyllabic words presented in competing speech babble. Auditory efferent activity was assessed by examining the resulting suppression of click-evoked otoacoustic emissions following the introduction of a contralateral, broad-band stimulus and the activity of the ipsilateral and contralateral acoustic reflex arc was evaluated using tones and broad-band noise. Results will be discussed relative to current theories of speech in noise performance and auditory inhibitory processes.

  11. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
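
    The reported onset and peak latencies come from time-resolved decoding: a classifier is trained and cross-validated on the sensor pattern at every time point, and the onset is the first time accuracy reliably exceeds chance. A generic sketch of that logic follows, with assumed data shapes and an off-the-shelf classifier rather than the authors' pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def decode_over_time(meg, labels, cv=5):
            """meg: (n_trials, n_sensors, n_times); labels: (n_trials,) condition codes.
            Returns mean cross-validated decoding accuracy per time point."""
            acc = np.empty(meg.shape[2])
            for t in range(meg.shape[2]):
                acc[t] = cross_val_score(LinearDiscriminantAnalysis(),
                                         meg[:, :, t], labels, cv=cv).mean()
            return acc  # onset: first supra-chance time; peak: acc.argmax()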

  12. Auditory steady-state response in cochlear implant patients.

    PubMed

    Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo

    2018-03-19

    Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study was to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation under free-field conditions, and to verify their biological origin. The sample comprised 11 subjects. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. Auditory steady-state responses were also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled the electrophysiological thresholds to be obtained for each subject in the sample. There were no auditory steady-state responses in either the 0 dB HL or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. Differences between behavioral and electrophysiological thresholds were -6±16, -2±13, 0±22 and -8±18 dB at 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique for evaluating hearing thresholds in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
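
    The multiple-frequency technique rests on tagging each audiometric carrier with its own modulation rate in the 70-110 Hz range, so that responses to all four carriers can be read out simultaneously at their respective modulation frequencies. A minimal stimulus sketch follows; the specific modulation rates are illustrative assumptions, not those used in the study.

        import numpy as np

        fs = 44100
        t = np.arange(0, 1.0, 1 / fs)
        carriers = [500, 1000, 2000, 4000]   # Hz, the audiometric frequencies tested
        mod_rates = [77, 85, 93, 101]        # Hz, one assumed rate per carrier

        stim = sum((1 + np.cos(2 * np.pi * fm * t)) / 2   # 100% amplitude modulation
                   * np.sin(2 * np.pi * fc * t)
                   for fc, fm in zip(carriers, mod_rates))
        stim /= np.max(np.abs(stim))         # normalize to avoid clipping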

  13. Audio-vocal system regulation in children with autism spectrum disorders.

    PubMed

    Russo, Nicole; Larson, Charles; Kraus, Nina

    2008-06-01

    Do children with autism spectrum disorders (ASD) respond similarly to perturbations in auditory feedback as typically developing (TD) children? Presentation of pitch-shifted voice auditory feedback to vocalizing participants reveals a close coupling between the processing of auditory feedback and vocal motor control. This paradigm was used to test the hypothesis that abnormalities in the audio-vocal system would negatively impact ASD compensatory responses to perturbed auditory feedback. Voice fundamental frequency (F(0)) was measured while children produced an /a/ sound into a microphone. The voice signal was fed back to the subjects in real time through headphones. During production, the feedback was pitch shifted (-100 cents, 200 ms) at random intervals for 80 trials. Averaged voice F(0) responses to pitch-shifted stimuli were calculated and correlated with both mental and language abilities as tested via standardized tests. A subset of children with ASD produced larger responses to perturbed auditory feedback than TD children, while the other children with ASD produced significantly lower response magnitudes. Furthermore, robust relationships between language ability, response magnitude and time of peak magnitude were identified. Because auditory feedback helps to stabilize voice F(0) (a major acoustic cue of prosody) and individuals with ASD have problems with prosody, this study identified potential mechanisms of dysfunction in the audio-vocal system for voice pitch regulation in some children with ASD. Objectively quantifying this deficit may inform both the assessment of a subgroup of ASD children with prosody deficits, as well as remediation strategies that incorporate pitch training.

  14. Sight and sound converge to form modality-invariant representations in temporo-parietal cortex

    PubMed Central

    Man, Kingson; Kaplan, Jonas T.; Damasio, Antonio; Meyer, Kaspar

    2013-01-01

    People can identify objects in the environment with remarkable accuracy, irrespective of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content-specificity (i.e., do different objects evoke different representations?) and modality-invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content-specificity and modality-invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supra-modal (i.e., conceptual) level. PMID:23175818
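
    Crossmodal MVPA of this kind typically trains a pattern classifier on trials from one modality and tests it on the other; above-chance transfer indicates a content-specific code that survives the change of modality. A schematic sketch under assumed data shapes and a generic linear classifier (not the authors' pipeline):

        from sklearn.svm import LinearSVC

        def crossmodal_transfer(visual_X, visual_y, auditory_X, auditory_y):
            """*_X: (n_trials, n_voxels) patterns; *_y: (n_trials,) object labels."""
            clf = LinearSVC().fit(visual_X, visual_y)  # train on one modality
            return clf.score(auditory_X, auditory_y)   # test on the other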

  15. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study.

    PubMed

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2011-01-01

    During speech communication, visual information may interact with the auditory system at various processing stages. Most notably, recent magnetoencephalography (MEG) data provided the first evidence for early and preattentive phonetic/phonological encoding of the visual data stream, prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables, disambiguated to /pa/ or /ta/ by the visual channel (speaking face), served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point to a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.

  16. Domestic pigs' (Sus scrofa domestica) use of direct and indirect visual and auditory cues in an object choice task.

    PubMed

    Nawroth, Christian; von Borell, Eberhard

    2015-05-01

    Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion to losses than non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder, the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, in either the visual or the auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to spontaneously infer the location of the reward. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.

  17. Malformation of the eighth cranial nerve in children.

    PubMed

    de Paula-Vernetta, Carlos; Muñoz-Fernández, Noelia; Mas-Estellés, Fernando; Guzmán-Calvete, Abel; Cavallé-Garrido, Laura; Morera-Pérez, Constantino

    2016-01-01

    Prevalence of congenital sensorineural hearing loss (SNHL) is approximately 1.5-6 in every 1,000 newborns. Dysfunction of the auditory nerve (auditory neuropathy) may be involved in up to 1%-10% of cases; hearing loss due to vestibulocochlear nerve (VCN) aplasia is less frequent. The objectives of this study were to describe the clinical manifestations, hearing thresholds and aetiology of children with SNHL and VCN aplasia. We present 34 children (mean age 20 months) with auditory nerve malformation and profound hearing loss, taken from a sample of 385 children implanted over a 10-year period. We studied demographic characteristics, hearing, genetics, risk factors and associated malformations (Casselman's and Sennaroglu's classifications). Data were processed using a bivariate descriptive statistical analysis (P<.05). Of all the cases, 58.8% were bilateral (IIa/IIa and I/I were the most common); among the unilateral cases, IIb was the most frequent. Auditory screening showed a sensitivity of 77.4%. A relationship between bilateral cases and systemic pathology was observed. We found a statistically significant difference in hearing loss across patients with the different types of aplasia defined by Casselman's classification. Computed tomography (CT) yielded a sensitivity of 46.3% and a specificity of 85.7%; however, magnetic resonance imaging (MRI) was the most sensitive imaging test. Ten percent of the children in this cochlear implant cohort had aplasia or hypoplasia of the auditory nerve. The degree of auditory loss was directly related to the different types of aplasia (Casselman's classification). Although CT and MRI are complementary, MRI is the test of choice for detecting auditory nerve malformation. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  18. Acute Auditory Stimulation with Different Styles of Music Influences Cardiac Autonomic Regulation in Men

    PubMed Central

    da Silva, Sheila Ap. F.; Guida, Heraldo L.; dos Santos Antonio, Ana Marcia; de Abreu, Luiz Carlos; Monteiro, Carlos B. M.; Ferreira, Celso; Ribeiro, Vivian F.; Barnabe, Viviani; Silva, Sidney B.; Fonseca, Fernando L. A.; Adami, Fernando; Petenusso, Marcio; Raimundo, Rodrigo D.; Valenti, Vitor E.

    2014-01-01

    Background: No clear evidence is available in the literature regarding the acute effect of different styles of music on cardiac autonomic control. Objectives: The present study aimed to evaluate the acute effects of classical baroque and heavy metal musical auditory stimulation on Heart Rate Variability (HRV) in healthy men. Patients and Methods: HRV was analyzed in the time domain (SDNN, RMSSD, NN50, and pNN50) and the frequency domain (LF, HF, and LF/HF) in 12 healthy men. HRV was recorded at seated rest for 10 minutes. Subsequently, the participants were exposed to classical baroque or heavy metal music for five minutes through an earphone at seated rest. After exposure to the first piece, they remained at rest for five minutes and were then again exposed to classical baroque or heavy metal music; the music sequence was random for each individual. Standard statistical methods were used for calculation of means and standard deviations, and ANOVA and the Friedman test were used for parametric and non-parametric distributions, respectively. Results: During heavy metal music, SDNN was reduced compared to baseline (P = 0.023). In addition, the LF index (ms² and nu) was reduced during exposure to both heavy metal and classical baroque musical auditory stimulation compared to the control condition (P = 0.010 and P = 0.048, respectively). However, the HF index (ms²) was reduced only during auditory stimulation with heavy metal music (P = 0.01). The LF/HF ratio, on the other hand, decreased during auditory stimulation with classical baroque music (P = 0.019). Conclusions: Acute auditory stimulation with the selected heavy metal music decreased both sympathetic and parasympathetic modulation of the heart, while exposure to the selected classical baroque music reduced sympathetic regulation of the heart. PMID:25177673
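
    The time-domain indices used here have standard definitions over the series of normal-to-normal (NN) intervals; a compact sketch of those definitions follows (the input intervals are illustrative).

        import numpy as np

        def hrv_time_domain(nn_ms):
            """nn_ms: consecutive normal-to-normal intervals in milliseconds."""
            nn = np.asarray(nn_ms, dtype=float)
            d = np.diff(nn)
            return {
                "SDNN":  nn.std(ddof=1),                 # SD of all NN intervals
                "RMSSD": np.sqrt(np.mean(d ** 2)),       # RMS of successive differences
                "NN50":  int(np.sum(np.abs(d) > 50)),    # successive diffs > 50 ms
                "pNN50": 100 * np.mean(np.abs(d) > 50),  # ... as a percentage
            }

        print(hrv_time_domain([812, 790, 845, 805, 880, 860]))  # illustrative intervals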

  19. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  20. Cognitive fatigue in patients with myasthenia gravis.

    PubMed

    Jordan, Berit; Schweden, Tabea L K; Mehl, Theresa; Menge, Uwe; Zierz, Stephan

    2017-09-01

    Cognitive fatigue has frequently been reported in myasthenia gravis (MG). However, objective assessment of cognitive fatigability has never been evaluated. Thirty-three MG patients with stable generalized disease and 17 healthy controls underwent a test battery including repeated testing of attention and concentration (d2-R) and the Paced Auditory Serial Addition Test (PASAT). Fatigability was quantified as a linear trend (LT) reflecting the change in performance across successive constant time intervals. Additionally, fatigue questionnaires were used. MG patients showed a negative LT in the second d2-R testing, indicating cognitive fatigability. This finding significantly differed from the stable cognitive performance of controls (P < 0.05). PASAT results did not differ between groups. Self-assessed fatigue was significantly higher in MG patients compared with controls (P < 0.001), but did not correlate with LT. LT quantifies cognitive fatigability as an objective measurement of performance decline in MG patients. Self-assessed cognitive fatigue is not correlated with objective findings. Muscle Nerve 56: 449-457, 2017. © 2016 Wiley Periodicals, Inc.
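
    The linear trend reduces, in its simplest form, to the slope of performance regressed on interval index within a test: a negative slope indicates within-test decline (fatigability). A minimal sketch under that assumption, with illustrative interval scores:

        import numpy as np

        def linear_trend(interval_scores):
            """Slope of performance across successive equal-length intervals."""
            slope, _ = np.polyfit(np.arange(len(interval_scores)),
                                  interval_scores, 1)
            return slope  # < 0 suggests fatigability

        print(linear_trend([48, 46, 45, 42, 40]))  # declining scores -> negative LT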

  1. Use of an Auditory Hallucination Simulation to Increase Student Pharmacist Empathy for Patients with Mental Illness

    PubMed Central

    Eukel, Heidi N.; Frenzel, Jeanne E.; Werremeyer, Amy; McDaniel, Becky

    2016-01-01

    Objective. To increase student pharmacist empathy through the use of an auditory hallucination simulation. Design. Third-year professional pharmacy students independently completed seven stations requiring skills such as communication, following directions, reading comprehension, and cognition while listening to an audio recording simulating what one experiencing auditory hallucinations may hear. Following the simulation, students participated in a faculty-led debriefing and completed a written reflection. Assessment. The Kiersma-Chen Empathy Scale was completed by each student before and after the simulation to measure changes in empathy. The written reflections were read and qualitatively analyzed. Empathy scores increased significantly after the simulation. Qualitative analysis showed students most frequently reported feeling distracted and frustrated. All student participants recommended the simulation be offered to other student pharmacists, and 99% felt the simulation would impact their future careers. Conclusions. With approximately 10 million adult Americans suffering from serious mental illness, it is important for pharmacy educators to prepare students to provide adequate patient care to this population. This auditory hallucination simulation increased student pharmacist empathy for patients with mental illness. PMID:27899838

  2. Subcortical encoding of sound is enhanced in bilinguals and relates to executive function advantages

    PubMed Central

    Krizman, Jennifer; Marian, Viorica; Shook, Anthony; Skoe, Erika; Kraus, Nina

    2012-01-01

    Bilingualism profoundly affects the brain, yielding functional and structural changes in cortical regions dedicated to language processing and executive function [Crinion J, et al. (2006) Science 312:1537–1540; Kim KHS, et al. (1997) Nature 388:171–174]. Comparatively, musical training, another type of sensory enrichment, translates to expertise in cognitive processing and refined biological processing of sound in both cortical and subcortical structures. Therefore, we asked whether bilingualism can also promote experience-dependent plasticity in subcortical auditory processing. We found that adolescent bilinguals, listening to the speech syllable [da], encoded the stimulus more robustly than age-matched monolinguals. Specifically, bilinguals showed enhanced encoding of the fundamental frequency, a feature known to underlie pitch perception and grouping of auditory objects. This enhancement was associated with executive function advantages. Thus, through experience-related tuning of attention, the bilingual auditory system becomes highly efficient in automatically processing sound. This study provides biological evidence for system-wide neural plasticity in auditory experts that facilitates a tight coupling of sensory and cognitive functions. PMID:22547804

  3. Auditory Performance and Electrical Stimulation Measures in Cochlear Implant Recipients With Auditory Neuropathy Compared With Severe to Profound Sensorineural Hearing Loss.

    PubMed

    Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal

    The aim of the study was to compare auditory and speech outcomes and electrical parameters on average 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN with a current age of 5 to 12.2 years who had been using a cochlear implant for at least 3.4 years and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) Auditory and speech tests. (2) Residual hearing. (3) Electrical stimulation parameters. (4) Correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes. The children with isolated AN performed as well as the children with SNHL on auditory and speech recognition tests in both quiet and noise. More children in the AN group than the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, lower comfortable level and dynamic range, and lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds. Prelingual children with isolated AN who fail to show expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a pattern similar to that of children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution to electrophonic hearing from the remaining neurons and hair cells. It is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.

  4. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, which sensory information is most effective in BF systems for motor learning of postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by swaying the body in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory, but not the visual, BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
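
    Stevens' power law states that perceived magnitude grows as a power function of stimulus intensity, psi = k * I**a, with a modality-specific exponent a. Equalizing the two feedback channels therefore amounts to inverting the law per modality so that the same perceived magnitude maps to different physical magnitudes. The exponents below are illustrative assumptions, not the study's values.

        import numpy as np

        def match_intensity(target_psi, k, a):
            """Invert Stevens' law psi = k * I**a for the required intensity."""
            return (target_psi / k) ** (1 / a)

        a_visual, a_auditory = 0.7, 0.67        # assumed modality exponents
        for psi in np.linspace(1, 10, 4):       # equal perceived magnitudes...
            iv = match_intensity(psi, 1.0, a_visual)
            ia = match_intensity(psi, 1.0, a_auditory)
            print(f"psi={psi:.1f}: visual I={iv:.2f}, auditory I={ia:.2f}")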

  5. Multi-voxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    PubMed Central

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350
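
    The neural-pattern-similarity analysis at the core of these findings reduces, in its simplest form, to correlating multi-voxel activity patterns across conditions: a region encoding an acoustics-independent error signal should show similar patterns for the two acoustically different feedback distortions during vocalization, but not during passive listening. A toy sketch of that comparison, with all inputs simulated:

        import numpy as np

        def pattern_similarity(pattern_a, pattern_b):
            """Pearson correlation between two voxel-wise activity patterns."""
            return np.corrcoef(pattern_a, pattern_b)[0, 1]

        rng = np.random.default_rng(0)
        shared_error = rng.normal(size=200)                 # common "error" component
        feedback_type1 = shared_error + 0.5 * rng.normal(size=200)
        feedback_type2 = shared_error + 0.5 * rng.normal(size=200)
        print(pattern_similarity(feedback_type1, feedback_type2))  # high similarity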

  6. Effect of conductive hearing loss on central auditory function.

    PubMed

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term conductive hearing loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or auditory temporal processing (ATP). It can be argued that ATP may be the component underlying many central auditory processing capabilities, such as speech comprehension or sound localization. Little is known about the consequences of CHL for temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. In this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (controls), aged between 18 and 45 years, were recruited. To evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly smaller for the control group than for the CHL group in both ears (right: p=0.004; left: p<0.001). Individuals with CHL gave significantly fewer correct responses than individuals with normal hearing on both sides (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  7. [In Process Citation]

    PubMed

    Ackermann; Mathiak

    1999-11-01

    Pure word deafness (auditory verbal agnosia) is characterized by impaired auditory comprehension, repetition of verbal material, and writing to dictation, whereas spontaneous speech production and reading remain largely unaffected. Sometimes this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and of suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Often, lip reading and/or a slowed speaking rate can compensate, within limits, for the speech comprehension deficits. Apart from a few exceptions, the available reports of pure word deafness documented a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues, and/or spatial localization of acoustic events were compromised as well. The observed variable constellation of auditory signs and symptoms in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving, as documented in a variety of non-human species, the encoding of specific stimulus parameters. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or an impairment of a suggested "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere. Only a few instances of a rather isolated disruption of the discrimination/identification of nonverbal sound sources, with uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.

  8. Blast-induced tinnitus and hyperactivity in the auditory cortex of rats.

    PubMed

    Luo, Hao; Pace, Edward; Zhang, Jinsheng

    2017-01-06

    Blast exposure can cause tinnitus and hearing impairment by damaging the auditory periphery and through direct impact to the brain, triggering neural plasticity in both auditory and non-auditory centers. However, the underlying neurophysiological mechanisms of blast-induced tinnitus are still unknown. In this study, we induced tinnitus in rats using blast exposure and investigated changes in spontaneous firing and bursting activity in the auditory cortex (AC) at one day, one month, and three months after blast exposure. Our results showed that spontaneous activity in the tinnitus-positive group began changing at one month after blast exposure and manifested as robust hyperactivity across all frequency regions at three months after exposure. We also observed an increased bursting rate in the low-frequency region at one month after blast exposure and across all frequency regions at three months after exposure. Taken together, these results suggest that spontaneous firing and bursting activity in the AC play an important role in blast-induced chronic tinnitus, as opposed to acute tinnitus, thus favoring a bottom-up mechanism. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. The effects of speech output technology in the learning of graphic symbols.

    PubMed Central

    Schlosser, R W; Belfiore, P J; Nigam, R; Blischak, D; Hetzroni, O

    1995-01-01

    The effects of auditory stimuli in the form of synthetic speech output on the learning of graphic symbols were evaluated. Three adults with severe to profound mental retardation and communication impairments were taught to point to lexigrams when presented with words under two conditions. In the first condition, participants used a voice output communication aid and received synthetic speech as antecedent and consequent stimuli. In the second condition, with a nonelectronic communication board, participants did not receive synthetic speech. A parallel treatments design was used to evaluate the effects of synthetic speech output as an added component of the augmentative and alternative communication system. All 3 participants reached criterion when provided with the auditory stimuli. Although 2 participants also reached criterion when not provided with the auditory stimuli, the addition of auditory stimuli resulted in more efficient learning and a decreased error rate. Maintenance results, however, indicated no differences between conditions. Findings suggest that auditory stimuli in the form of synthetic speech contribute to the efficient acquisition of graphic communication symbols. PMID:14743828

  10. Harmonic template neurons in primate auditory cortex underlying complex sound processing

    PubMed Central

    Feng, Lei

    2017-01-01

    Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341
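
    As a rough intuition for what a "harmonic template" computes, the toy sketch below (an illustrative construction, not the study's neural model) builds a spectral comb at multiples of a fundamental and shows that it responds more strongly to a harmonic complex than to one with jittered, inharmonic partials.

```python
# Toy 'harmonic template': a spectral comb at multiples of f0 that is
# facilitated by harmonic complexes relative to inharmonic ones.
import numpy as np

fs = 32000
t = np.arange(0, 0.2, 1 / fs)
f0 = 440.0

def complex_tone(partials_hz):
    return sum(np.sin(2 * np.pi * f * t) for f in partials_hz)

harmonic = complex_tone([f0 * k for k in range(1, 9)])
rng = np.random.default_rng(1)
inharmonic = complex_tone([f0 * k * (1 + rng.uniform(-0.08, 0.08))
                           for k in range(1, 9)])

# Template: unit weights at the first eight harmonics of f0.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
template = np.zeros_like(freqs)
for k in range(1, 9):
    template[np.argmin(np.abs(freqs - f0 * k))] = 1.0

def template_response(sound):
    spectrum = np.abs(np.fft.rfft(sound))
    return float(template @ spectrum) / float(np.linalg.norm(spectrum))

print(template_response(harmonic))    # larger response
print(template_response(inharmonic))  # smaller response
```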

  11. The role of working memory in auditory selective attention.

    PubMed

    Dalton, Polly; Santangelo, Valerio; Spence, Charles

    2009-11-01

    A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.

  12. Auditory and Visual Cues for Topic Maintenance with Persons Who Exhibit Dementia of Alzheimer's Type.

    PubMed

    Teten, Amy F; Dagenais, Paul A; Friehe, Mary J

    2015-01-01

    This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate-stage DAT engaged in conversation with the first author. Three topics related to activities of daily living (recreational activities, food, and grooming) were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor, in that the DAT participants exhibited better topic maintenance under the visual and auditory conditions than at baseline. In general, the performance of the participants was not affected by topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food.

  13. Assessment of auditory impression of the coolness and warmness of automotive HVAC noise.

    PubMed

    Nakagawa, Seiji; Hotehama, Takuya; Kamiya, Masaru

    2017-07-01

    Noise induced by a heating, ventilation and air conditioning (HVAC) system in a vehicle is an important factor affecting the comfort of a car cabin interior. Much effort has been devoted to reducing noise levels; however, there is also a need for a sound design approach that addresses the noise problem from a different point of view. In this study, focusing on the auditory impression of coolness and warmness conveyed by automotive HVAC noise, psychoacoustical listening tests were performed using a paired comparison technique under various room-temperature conditions. Five stimuli were synthesized by stretching the spectral envelopes of recorded automotive HVAC noise to assess the effect of the spectral centroid, and were presented to normal-hearing subjects. Results show that the spectral centroid significantly affects the auditory impression of coolness and warmness; a higher spectral centroid induces a cooler auditory impression regardless of the room temperature.
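
    The acoustic feature manipulated in this study, the spectral centroid, is simply the amplitude-weighted mean frequency of a signal's spectrum. Below is a minimal sketch, assuming a mono signal array (this is an illustration, not the study's analysis code).

```python
# Spectral centroid: amplitude-weighted mean frequency of the spectrum,
# the feature varied in the HVAC study by stretching spectral envelopes.
import numpy as np

def spectral_centroid(signal: np.ndarray, fs: float) -> float:
    magnitude = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return float(np.sum(freqs * magnitude) / np.sum(magnitude))

# Example: the same noise shaped two ways; the more strongly low-pass
# filtered version has the lower (perceptually 'warmer') centroid.
fs = 44100
rng = np.random.default_rng(0)
noise = rng.normal(size=fs)
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(noise.size, d=1.0 / fs)
dull = np.fft.irfft(spectrum * np.exp(-freqs / 1000.0))
bright = np.fft.irfft(spectrum * np.exp(-freqs / 4000.0))
print(spectral_centroid(dull, fs) < spectral_centroid(bright, fs))  # True
```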

  14. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization and some of the competing models of the representation of auditory space in humans. PMID:23886698

  15. Ultrasound Produces Extensive Brain Activation via a Cochlear Pathway.

    PubMed

    Guo, Hongsun; Hamilton, Mark; Offutt, Sarah J; Gloeckner, Cory D; Li, Tianqi; Kim, Yohan; Legon, Wynn; Alford, Jamu K; Lim, Hubert H

    2018-06-06

    Ultrasound (US) can noninvasively activate intact brain circuits, making it a promising neuromodulation technique. However, little is known about the underlying mechanism. Here, we apply transcranial US and perform brain mapping studies in guinea pigs using extracellular electrophysiology. We find that US elicits extensive activation across cortical and subcortical brain regions. However, transection of the auditory nerves or removal of cochlear fluids eliminates the US-induced activity, revealing an indirect auditory mechanism for US neural activation. Our findings indicate that US activates the ascending auditory system through a cochlear pathway, which can activate other non-auditory regions through cross-modal projections. This cochlear pathway mechanism challenges the idea that US can directly activate neurons in the intact brain, suggesting that future US stimulation studies will need to control for this effect to reach reliable conclusions. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Auditory and Visual Cues for Topic Maintenance with Persons Who Exhibit Dementia of Alzheimer's Type

    PubMed Central

    Teten, Amy F.; Dagenais, Paul A.; Friehe, Mary J.

    2015-01-01

    This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate-stage DAT engaged in conversation with the first author. Three topics related to activities of daily living (recreational activities, food, and grooming) were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor, in that the DAT participants exhibited better topic maintenance under the visual and auditory conditions than at baseline. In general, the performance of the participants was not affected by topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food. PMID:26171273

  17. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to discriminate old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, whereas in blocks 2, 3 and 4 they knew that their memory for item location would be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  18. Biologically inspired computation and learning in Sensorimotor Systems

    NASA Astrophysics Data System (ADS)

    Lee, Daniel D.; Seung, H. S.

    2001-11-01

    Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the network are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibulo-ocular reflex system.
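
    The reliability-weighted cue combination described above can be caricatured with a simple delta rule. The sketch below is a hypothetical stand-in (the paper's actual network and update rule are not reproduced here): teacher feedback nudges per-channel weights so that informative channels come to dominate the combined tracking score.

```python
# Toy sketch of teacher-driven reliability weighting across input channels
# (illustrative delta rule, not the robot's actual learning algorithm).
import numpy as np

channels = ["color", "luminance", "motion", "audio"]
weights = np.full(len(channels), 0.25)   # start with uniform weighting
lr = 0.1                                 # learning rate

def combined_score(cue_scores: np.ndarray) -> float:
    return float(weights @ cue_scores)

def teacher_update(cue_scores: np.ndarray, target: float) -> None:
    """Delta rule: move weight toward channels that predict the teacher."""
    global weights
    error = target - combined_score(cue_scores)
    weights = np.clip(weights + lr * error * cue_scores, 0.0, None)
    weights /= weights.sum()             # keep weights normalized

rng = np.random.default_rng(0)
for _ in range(500):
    # Hypothetical trial: color and motion are informative, audio is noise.
    target = rng.uniform()               # teacher's tracking judgment
    cue_scores = np.array([target, 0.5 * target + 0.5 * rng.uniform(),
                           target, rng.uniform()])
    teacher_update(cue_scores, target)

print(dict(zip(channels, np.round(weights, 2))))  # color/motion weighted up
```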

  19. A biologically plausible computational model for auditory object recognition.

    PubMed

    Larson, Eric; Billimoria, Cyrus P; Sen, Kamal

    2009-01-01

    Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, these metrics can assign a test spike train to the stimulus class whose responses it most resembles; the prototype spike train nearest to the tested spike train then identifies the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination, inspired by a spike distance metric, using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian analog of primary auditory cortex. We also compare the performance and robustness of this model to two alternative models of song discrimination: one based on coincidence detection and one based on firing rate.
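
    The nearest-prototype scheme that inspired the model can be sketched directly. The code below illustrates the spike-distance approach described above, not the integrate-and-fire circuit itself; the exponential kernel width and the example spike trains are assumptions.

```python
# Nearest-prototype spike train classification with a van Rossum-style
# distance: trains are convolved with a causal exponential kernel and
# compared in that smoothed space.
import numpy as np

def smooth(spike_times, t_max=1.0, dt=1e-3, tau=0.02):
    """Convolve a spike train with a causal exponential kernel."""
    t = np.arange(0.0, t_max, dt)
    trace = np.zeros_like(t)
    for s in spike_times:
        trace += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
    return trace

def distance(train_a, train_b):
    diff = smooth(train_a) - smooth(train_b)
    return float(np.sqrt(np.sum(diff ** 2)))

def classify(test_train, prototypes):
    """prototypes: dict mapping stimulus label -> prototype spike train."""
    return min(prototypes, key=lambda label: distance(test_train, prototypes[label]))

# Hypothetical prototypes for two 'songs' and a jittered test response.
prototypes = {"song_A": [0.10, 0.25, 0.40, 0.70],
              "song_B": [0.05, 0.50, 0.55, 0.90]}
test = [0.11, 0.24, 0.42, 0.69]    # noisy rendition of song A
print(classify(test, prototypes))  # -> 'song_A'
```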

  20. The Effects of Repeated Low-Level Blast Exposure on Hearing in Marines

    PubMed Central

    Kubli, Lina R.; Pinto, Robin L.; Burrows, Holly L.; Littlefield, Philip D.; Brungart, Douglas S.

    2017-01-01

    Background: This study evaluated a group of Military Service Members, called "Breachers," who specialize in explosive blast training and are routinely exposed to multiple low-level blasts while teaching breaching at the U.S. Marine Corps base in Quantico, Virginia. The objective was to determine whether there are any acute or long-term auditory changes due to the repeated low-level blast exposures used in training. The performance of the instructor group ("Breachers") was compared to that of a control group ("Engineers"). Methods: A total of 11 Breachers and 4 Engineers were evaluated. Participants received comprehensive auditory tests, including pure-tone testing, speech-in-noise (SIN) measures, and central auditory behavioral and objective tests using early and late (P300) auditory evoked potentials, over a period of 17 months. They also received shorter assessments immediately following blast exposure onsite at Quantico. Results: No acute or longitudinal effects were identified. However, there were some interesting baseline effects in both groups. Contrary to expectations, onsite hearing thresholds and distortion product otoacoustic emissions were slightly better at a few frequencies immediately after blast exposure than measurements obtained with the same equipment weeks to months after each exposure. Conclusions: To date, this is the most comprehensive study of the long-term effects of blast exposure on hearing. Despite extensive testing to assess changes, the findings suggest that the exposure levels used in this military training environment do not have an obvious deleterious effect on hearing. PMID:28937017
