Memory as embodiment: The case of modality and serial short-term memory.
Macken, Bill; Taylor, John C; Kozlov, Michail D; Hughes, Robert W; Jones, Dylan M
2016-10-01
Classical explanations for the modality effect (superior short-term serial recall of auditory compared to visual sequences) typically recur to privileged processing of information derived from auditory sources. Here we critically appraise such accounts, and re-evaluate the nature of the canonical empirical phenomena that have motivated them. Three experiments show that the standard account of modality in memory is untenable, since auditory superiority in recency is often accompanied by visual superiority in mid-list serial positions. We explain this simultaneous auditory and visual superiority by reference to the way in which perceptual objects are formed in the two modalities and how those objects are mapped to speech motor forms to support sequence maintenance and reproduction. Specifically, stronger obligatory object formation operating in the standard auditory form of sequence presentation compared to that for visual sequences leads both to enhanced addressability of information at the object boundaries and reduced addressability for that in the interior. Because standard visual presentation does not lead to such object formation, such sequences do not show the boundary advantage observed for auditory presentation, but neither do they suffer loss of addressability associated with object information, thereby affording more ready mapping of that information into a rehearsal cohort to support recall. We show that a range of factors that impede this perceptual-motor mapping eliminate visual superiority while leaving auditory superiority unaffected. We make a general case for viewing short-term memory as an embodied, perceptual-motor process.
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. A better memory performance for visual compared with, for example, auditory material is therefore assumed. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain.
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Educators Prescriptive Handbook: A Developmental Sequence of Learning Skills.
ERIC Educational Resources Information Center
Santa Ana Unified School District, CA.
The handbook lists 141 developmental objectives with instructions for remediation to aid children with learning problems in the areas of sensory motor development, auditory perception, language, visual perception, and academic achievement. Objectives are listed in chart format with each objective associated with one or more skill examples,…
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Smulders, Tom V; Jarvis, Erich D
2013-11-01
Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli.
Echolocation system of the bottlenose dolphin
NASA Astrophysics Data System (ADS)
Dubrovsky, N. A.
2004-05-01
The hypothesis put forward by Vel’min and Dubrovsky [1] is discussed. The hypothesis suggests that bottlenose dolphins possess two functionally separate auditory subsystems: one of them serves for analyzing extraneous sounds, as in nonecholocating terrestrial animals, and the other performs the analysis of echoes caused by the echolocation clicks of the animal itself. The first subsystem is called passive hearing, and the second, active hearing. The results of experimental studies of the dolphin’s echolocation system are discussed to confirm the proposed hypothesis. For the active hearing of dolphins, the notion of a critical interval is considered as the interval of time within which a merged auditory image of the echolocation object is formed, provided all echo highlights of the echo from this object fall within that interval.
Cortical Representations of Speech in a Multitalker Auditory Scene.
Puvvada, Krishna C; Simon, Jonathan Z
2017-09-20
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
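The "stimulus reconstruction" analyses referenced in this abstract are, in much of this literature, implemented as regularized linear backward models that map lagged sensor data onto the speech envelope. The following is a minimal sketch of that general idea, assuming simulated data throughout; the array shapes, the ridge penalty, and all variable names are illustrative choices, not the study's actual pipeline.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sensors, n_lags = 5000, 32, 20

envelope = rng.standard_normal(n_samples)            # speech envelope (target)
meg = np.outer(envelope, rng.standard_normal(n_sensors))
meg += 0.5 * rng.standard_normal((n_samples, n_sensors))  # noisy "sensors"

# Lagged design matrix: each row holds the sensor array over a time window.
X = np.hstack([np.roll(meg, lag, axis=0) for lag in range(n_lags)])
X, y = X[n_lags:], envelope[n_lags:]                 # drop wrap-around rows

lam = 100.0                                          # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
reconstruction = X @ w

# Reconstruction fidelity: correlation between decoded and actual envelope.
print(f"r = {np.corrcoef(reconstruction, y)[0, 1]:.2f}")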
Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael
2014-01-01
Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.
Jones, S J; Longe, O; Vaz Pato, M
1998-03-01
Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
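The texture stimuli described here can be approximated by summing many frequency-modulated ramps whose slope coherence is under experimental control. The sketch below is a rough approximation under that reading; the sample rate, ramp counts, frequency ranges, and the coherence parameterization are illustrative guesses, not the study's actual synthesis.

import numpy as np

fs, dur = 16000, 1.0
t = np.arange(int(fs * dur)) / fs

def fm_ramp(f_start, f_end):
    # Linear frequency ramp synthesized by integrating instantaneous frequency.
    freqs = np.linspace(f_start, f_end, t.size)
    phase = 2 * np.pi * np.cumsum(freqs) / fs
    return np.sin(phase)

def texture(coherence, n_ramps=30, seed=0):
    # 'coherence' = proportion of ramps sharing a common frequency slope;
    # the remainder get independent random slopes.
    rng = np.random.default_rng(seed)
    shared_slope = 400.0                      # Hz per second
    sig = np.zeros_like(t)
    for _ in range(n_ramps):
        f0 = rng.uniform(300.0, 3000.0)
        slope = shared_slope if rng.random() < coherence else rng.uniform(-800.0, 800.0)
        sig += fm_ramp(f0, f0 + slope * dur)
    return sig / n_ramps

# A texture boundary: coherence changes from low to high at the midpoint.
stimulus = np.concatenate([texture(0.2), texture(0.9, seed=1)])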
Werner, Sebastian; Noppeney, Uta
2010-02-17
Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Jenison, Rick
1995-01-01
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Auditory memory can be object based.
Dyson, Benjamin J; Ishfaq, Feraz
2008-04-01
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.
Feature assignment in perception of auditory figure.
Gregg, Melissa K; Samuel, Arthur G
2012-08-01
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
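To first order, the reference-frame transformation described here is simple arithmetic for horizontal azimuth: an eye-centered location is the head-centered location minus the current eye direction. A minimal sketch follows; it is a deliberate simplification of the population-level rate-code computations the review discusses.

def head_to_eye_centered(sound_azimuth_deg, eye_azimuth_deg):
    # Eye-centered azimuth = head-centered azimuth minus eye-in-head
    # direction (first-order approximation for the horizontal plane).
    return sound_azimuth_deg - eye_azimuth_deg

# A sound at +20 deg (head-centered) viewed with gaze at +10 deg lands
# at +10 deg in eye-centered coordinates.
print(head_to_eye_centered(20.0, 10.0))   # -> 10.0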
A Dual-Process Account of Auditory Change Detection
ERIC Educational Resources Information Center
McAnally, Ken I.; Martin, Russell L.; Eramudugolla, Ranmalee; Stuart, Geoffrey W.; Irvine, Dexter R. F.; Mattingley, Jason B.
2010-01-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed…
Auditory, visual, and bimodal data link displays and how they support pilot performance.
Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S
2013-06-01
The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to instrument panel or mounted on console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper-left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 seconds per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results of previous studies by Helleberg and Wickens, and by Lancaster and Casali, the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. The data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
2016-06-01
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on (1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and (2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information.
Concept Formation Skills in Long-Term Cochlear Implant Users
ERIC Educational Resources Information Center
Castellanos, Irina; Kronenberger, William G.; Beer, Jessica; Colson, Bethany G.; Henning, Shirley C.; Ditmars, Allison; Pisoni, David B.
2015-01-01
This study investigated if a period of auditory sensory deprivation followed by degraded auditory input and related language delays affects visual concept formation skills in long-term prelingually deaf cochlear implant (CI) users. We also examined if concept formation skills are mediated or moderated by other neurocognitive domains (i.e.,…
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high-complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: (1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and (2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory-pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
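As a rough illustration of the computational pattern this dissertation targets (repeated evaluation of an auditory model inside an objective-function minimization, made cheaper by pruning low-contribution bands), here is a toy sketch. The model stages, the screening pass, and all parameter values are invented stand-ins, not the dissertation's actual algorithms.

import numpy as np

def auditory_model(signal, n_bands=64):
    # Stand-in for the expensive stage: band-wise spectral energy
    # (a real model would compute excitation and specific loudness here).
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

def pruned_model(signal, n_bands=64, keep=16):
    # Frequency pruning: a cheap low-resolution screening pass selects the
    # highest-energy bands; only those are evaluated exactly, the rest are
    # approximated as zero. (A real implementation would skip the pruned
    # bands' model stages entirely rather than compute and discard them.)
    coarse = auditory_model(signal[::8], n_bands)
    idx = np.argsort(coarse)[-keep:]
    pruned = np.zeros(n_bands)
    pruned[idx] = auditory_model(signal, n_bands)[idx]
    return pruned

def perceptual_objective(candidate, target_pattern):
    # Perceptual distance evaluated through the cheap pruned model.
    return np.sum((pruned_model(candidate) - target_pattern) ** 2)

rng = np.random.default_rng(1)
target = pruned_model(rng.standard_normal(1024))
candidates = [rng.standard_normal(1024) for _ in range(100)]
best = min(candidates, key=lambda c: perceptual_objective(c, target))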
Delogu, Franco; Lilla, Christopher C
2017-11-01
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to distinguish old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.
Smith, Nicholas A.; Folland, Nicholas A.; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception.
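The stimulus class used here, a harmonic complex with one harmonic mistuned by 8%, is straightforward to synthesize. A minimal sketch follows; the fundamental frequency, harmonic count, and duration are illustrative choices, not the study's exact stimulus parameters.

import numpy as np

fs, dur, f0 = 44100, 1.0, 200.0   # sample rate, duration (s), fundamental (Hz)
t = np.arange(int(fs * dur)) / fs

def complex_tone(f0, n_harmonics=10, mistuned_harmonic=None, mistune_pct=8.0):
    tone = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        f = h * f0
        if h == mistuned_harmonic:
            f *= 1.0 + mistune_pct / 100.0   # shift this harmonic off the grid
        tone += np.sin(2 * np.pi * f * t)
    return tone / n_harmonics

in_tune = complex_tone(f0)                        # heard as one auditory object
mistuned = complex_tone(f0, mistuned_harmonic=3)  # 3rd harmonic mistuned by 8%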
Design of Training Systems, Phase II-A Report. An Educational Technology Assessment Model (ETAM)
1975-07-01
"Format" for the perceptual tasks; this is applicable to auditory as well as visual tasks. Student participation in learning route: when a student enters… Skill formats; skill training (Subtask 05.05, vehicle properties). Instructional functions: type of stimulus presented to student (visual, auditory)… For example, a trainer to identify and interpret auditory signals would not be represented in the above list. Trainers in the vehicle…
Hearing rehabilitation in Treacher Collins Syndrome with bone anchored hearing aid
Polanski, José Fernando; Plawiak, Anna Clara; Ribas, Angela
2015-01-01
Objective: To describe a case of hearing rehabilitation with a bone-anchored hearing aid in a patient with Treacher Collins syndrome. Case description: A 3-year-old male patient with Treacher Collins syndrome and severe complications of the syndrome, mostly related to the upper airway and hearing. He had bilateral atresia of the external auditory canals and malformation of the pinna. Initial hearing rehabilitation used a bone-vibration arch, but acceptance was poor owing to the discomfort caused by skull compression. A bone-anchored hearing aid in soft-band format was then prescribed. The results were evaluated through behavioral hearing tests and the Meaningful Use of Speech Scale (MUSS) and Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) questionnaires. Comments: The patient showed greater acceptance of the bone-anchored hearing aid than of the traditional bone-vibration arch. Audiological tests and the speech and auditory skills assessments also showed better communication and hearing outcomes. The bone-anchored hearing aid is a good option for hearing rehabilitation in this syndrome.
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies.
Auditory-visual object recognition time suggests specific processing for animal sounds.
Suied, Clara; Viaud-Delmon, Isabelle
2009-01-01
Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality-specific (visual vs. auditory) and code-specific (non-spatial vs. spatial) cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interaction with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments), the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual, spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
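The n-back tasks used here share one scoring rule: a trial is a target when the current item matches the item presented n positions earlier. A tiny sketch of that rule (the stimulus letters are illustrative):

def nback_targets(sequence, n):
    # Indices of targets: the item matches the one presented n trials back.
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

# In a 2-back task over A B A B C A A C, positions 2 and 3 are targets.
print(nback_targets(list("ABABCAAC"), 2))   # -> [2, 3]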
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
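The visual-enhancement measure R(a) mentioned in this abstract is commonly computed as the audiovisual gain normalized by the room available for improvement over auditory-only performance; a small sketch under that assumption, with scores as proportions correct:

def visual_enhancement(av_correct, a_correct):
    # R(a): audiovisual gain relative to the maximum possible improvement
    # over auditory-only performance (inputs are proportions correct).
    return (av_correct - a_correct) / (1.0 - a_correct)

# Auditory-only 60% correct, audiovisual 85% correct:
print(visual_enhancement(0.85, 0.60))   # -> 0.625 of the possible gain realized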
Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang
2017-01-01
The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule, while in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.
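The Granger-causality logic used in this study (does the past of one region's signal improve prediction of another's beyond that signal's own past?) can be sketched with simulated time series standing in for the intracranial recordings; the lag, noise level, and region labels below are illustrative only.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 2000
ac = rng.standard_normal(n)                              # "auditory cortex"
insula = np.roll(ac, 5) + 0.5 * rng.standard_normal(n)   # delayed copy + noise

# Column convention in statsmodels: the test asks whether the SECOND column
# Granger-causes the FIRST, here whether AC activity predicts later insula
# activity beyond the insula signal's own past.
data = np.column_stack([insula, ac])
results = grangercausalitytests(data, maxlag=10)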
Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
2015-12-09
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we thus bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention.
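The psychometric-modeling step described here (steeper slopes read as more precise memory representations) can be illustrated with a simple logistic fit; the data below are simulated and the function form is a generic choice, not necessarily the study's exact model.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    # Probability of responding "higher" as a function of pitch difference.
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

pitch_diff = np.linspace(-2.0, 2.0, 9)        # probe pitch minus memory pitch
rng = np.random.default_rng(3)
p_resp = np.clip(logistic(pitch_diff, 0.0, 3.0)
                 + 0.05 * rng.standard_normal(pitch_diff.size), 0.0, 1.0)

(midpoint, slope), _ = curve_fit(logistic, pitch_diff, p_resp, p0=[0.0, 1.0])
print(f"fitted slope = {slope:.2f}")          # steeper = more precise memory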
Chen, Yu-Chen; Li, Xiaowei; Liu, Lijie; Wang, Jian; Lu, Chun-Qiang; Yang, Ming; Jiao, Yun; Zang, Feng-Chao; Radziwon, Kelly; Chen, Guang-Di; Sun, Wei; Krishnan Muthaiah, Vijaya Prakash; Salvi, Richard; Teng, Gao-Jun
2015-01-01
Hearing loss often triggers an inescapable buzz (tinnitus) and causes everyday sounds to become intolerably loud (hyperacusis), but exactly where and how this occurs in the brain is unknown. To identify the neural substrate for these debilitating disorders, we induced both tinnitus and hyperacusis with an ototoxic drug (salicylate) and used behavioral, electrophysiological, and functional magnetic resonance imaging (fMRI) techniques to identify the tinnitus–hyperacusis network. Salicylate depressed the neural output of the cochlea, but vigorously amplified sound-evoked neural responses in the amygdala, medial geniculate, and auditory cortex. Resting-state fMRI revealed hyperactivity in an auditory network composed of inferior colliculus, medial geniculate, and auditory cortex with side branches to cerebellum, amygdala, and reticular formation. Functional connectivity revealed enhanced coupling within the auditory network and segments of the auditory network and cerebellum, reticular formation, amygdala, and hippocampus. A testable model accounting for distress, arousal, and gating of tinnitus and hyperacusis is proposed. DOI: http://dx.doi.org/10.7554/eLife.06576.001
Neural dynamics underlying attentional orienting to auditory representations in short-term memory.
Backer, Kristina C; Binns, Malcolm A; Alain, Claude
2015-01-21
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors 0270-6474/15/351307-12$15.00/0.
Harris, J L; Salus, D; Rerecich, R; Larsen, D
1996-01-01
Assertions made by Merikle (1988) regarding audio subliminal messages were tested. Seventeen participants were presented subliminal messages embedded in a white-noise cover, and three signal-to-noise (S/N) detection ratios were examined. Participants were asked to guess message presence and message content, to determine subjective/objective thresholds. Results showed that participants were unable to identify target words presented in this audio subliminal stimulus format beyond chance levels.
Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation
2013-01-01
Background: Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound for English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training sessions and one post-training session. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed.
Results: After both passive listening and active training, the amplitude of the P2m wave, with a latency of 200 ms, increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered complete. The P2m changes were therefore discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study.
Conclusion: The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability to scrutinize fine differences in pre-voicing time. The different trajectories of brain and behaviour changes suggest that the P2m increase reflects brain processes that are necessary precursors of perceptual learning, so caution is required when interpreting a P2 amplitude increase between recordings made before and after training. PMID:24314010
Emergence of neural encoding of auditory objects while listening to competing speakers
Ding, Nai; Simon, Jonathan Z.
2012-01-01
A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470
Age-Related Deficits in Auditory Confrontation Naming
Hanna-Pladdy, Brenda; Choi, Hyun
2015-01-01
The naming of manipulable objects in older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were more impaired and slower to name action sounds than pictures or audiovisual combinations. Moreover, there was a sensory by age group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults unrelated to hearing insensitivity but modest improvement to multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. PMID:20677880
Crosscheck Principle in Pediatric Audiology Today: A 40-Year Perspective.
Hall, James W
2016-09-01
The crosscheck principle is just as important in pediatric audiology as it was when first described 40 years ago. That is, no auditory test result should be accepted and used in the diagnosis of hearing loss until it is confirmed or crosschecked by one or more independent measures. Exclusive reliance on only one or two tests, even objective auditory measures, may result in an auditory diagnosis that is unclear or perhaps incorrect. On the other hand, close and careful analysis of findings for a test battery consisting of objective procedures and behavioral tests whenever feasible usually leads to prompt and accurate diagnosis of auditory dysfunction. This paper provides a concise review of the crosscheck principle from its introduction to its clinical application today. The review concludes with a description of a modern test battery for pediatric hearing assessment that supplements traditional behavioral tests with a variety of independent objective procedures including aural immittance measures, otoacoustic emissions, and auditory evoked responses.
Functional neuroanatomy of auditory scene analysis in Alzheimer's disease
Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.
2015-01-01
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629
ERIC Educational Resources Information Center
Zupan, Barbra; Sussman, Joan E.
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…
Nonlinear Processing of Auditory Brainstem Response
2001-10-25
Auditory brainstem response potentials (ABR) are signals calculated from EEG signals registered as responses to acoustic activation of the auditory system. The ABR signals provide an objective diagnostic method widely applied in examinations of the hearing organs.
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
EEG signatures accompanying auditory figure-ground segregation
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István
2017-01-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185
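A minimal sketch of how a figure-ground stimulus of this kind can be synthesized may help make the coherence and duration manipulations concrete. All parameter values below (chord duration, frequency pool, figure placement) are illustrative assumptions, not the study's exact stimulus code.

```python
# Sketch of a figure-ground stimulus: a fixed chord of pure tones repeats
# (the 'figure') amid chords of randomly drawn tones (the 'ground').
import numpy as np

fs = 44100                   # audio sample rate (Hz)
chord_dur = 0.05             # duration of each tonal element (s)
n_chords = 20                # chords per stimulus
coherence = 4                # number of tones in the repeating figure
n_ground = 8                 # random background tones per chord
freq_pool = np.geomspace(200.0, 5000.0, 60)   # candidate tone frequencies

rng = np.random.default_rng(0)
figure = rng.choice(freq_pool, coherence, replace=False)  # tones that repeat

t = np.arange(int(fs * chord_dur)) / fs
chords = []
for i in range(n_chords):
    freqs = list(rng.choice(freq_pool, n_ground, replace=False))
    if 5 <= i < 15:              # the figure is present for 10 chords
        freqs += list(figure)    # the same tones recur: a perceptual 'figure'
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    chords.append(chord / len(freqs))   # crude amplitude normalization
stimulus = np.concatenate(chords)
# Raising 'coherence' or the number of figure-bearing chords corresponds to
# the figure-coherence and figure-duration manipulations described above.
```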
Wen, Teresa H; Afroz, Sonia; Reinhard, Sarah M; Palacios, Arnold R; Tapia, Kendal; Binder, Devin K; Razak, Khaleel A; Ethell, Iryna M
2017-10-13
Abnormal sensory responses associated with Fragile X Syndrome (FXS) and autism spectrum disorders include hypersensitivity and impaired habituation to repeated stimuli. Similar sensory deficits are also observed in adult Fmr1 knock-out (KO) mice and are reversed by genetic deletion of Matrix Metalloproteinase-9 (MMP-9) through yet unknown mechanisms. Here we present new evidence that impaired development of parvalbumin (PV)-expressing inhibitory interneurons may underlie hyper-responsiveness in auditory cortex of Fmr1 KO mice via MMP-9-dependent regulation of perineuronal nets (PNNs). First, we found that PV cell development and PNN formation around GABAergic interneurons were impaired in developing auditory cortex of Fmr1 KO mice. Second, MMP-9 levels were elevated in P12-P18 auditory cortex of Fmr1 KO mice and genetic reduction of MMP-9 to WT levels restored the formation of PNNs around PV cells. Third, in vivo single-unit recordings from auditory cortex neurons showed enhanced spontaneous and sound-driven responses in developing Fmr1 KO mice, which were normalized following genetic reduction of MMP-9. These findings indicate that elevated MMP-9 levels contribute to the development of sensory hypersensitivity by influencing formation of PNNs around PV interneurons suggesting MMP-9 as a new therapeutic target to reduce sensory deficits in FXS and potentially other autism spectrum disorders. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve
Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.
2015-01-01
The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538
The auditory nerve overlapped waveform (ANOW): A new objective measure of low-frequency hearing
NASA Astrophysics Data System (ADS)
Lichtenhan, Jeffery T.; Salt, Alec N.; Guinan, John J.
2015-12-01
One of the most pressing problems today in the mechanics of hearing is to understand the mechanical motions in the apical half of the cochlea. Almost all available measurements from the cochlear apex of basilar membrane or other organ-of-Corti transverse motion have been made from ears where the health, or sensitivity, in the apical half of the cochlea was not known. A key step in understanding the mechanics of the cochlear base was to trust mechanical measurements only when objective measures from auditory-nerve compound action potentials (CAPs) showed good preparation sensitivity. However, such traditional objective measures are not adequate monitors of cochlear health in the very low-frequency regions of the apex that are accessible for mechanical measurements. To address this problem, we developed the Auditory Nerve Overlapped Waveform (ANOW) that originates from auditory nerve output in the apex. When responses from the round window to alternating low-frequency tones are averaged, the cochlear microphonic is canceled and phase-locked neural firing interleaves in time (i.e., overlaps). The result is a waveform that oscillates at twice the probe frequency. We have demonstrated that this Auditory Nerve Overlapped Waveform - called ANOW - originates from auditory nerve fibers in the cochlear apex [8], relates well to single-auditory-nerve-fiber thresholds, and can provide an objective estimate of low-frequency sensitivity [7]. Our new experiments demonstrate that ANOW is a highly sensitive indicator of apical cochlear function. During four different manipulations to the scala media along the cochlear spiral, ANOW amplitude changed when either no, or only small, changes occurred in CAP thresholds. Overall, our results demonstrate that ANOW can be used to monitor cochlear sensitivity of low-frequency regions during experiments that make apical basilar membrane motion measurements.
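The averaging step behind ANOW can be illustrated with a toy simulation. The response model below is a deliberately crude placeholder (a polarity-flipping microphonic plus half-wave-rectified phase-locked firing), chosen only to show why the average of responses to opposite-polarity tones oscillates at twice the probe frequency.

```python
# Sketch of the ANOW averaging logic; signals are synthetic placeholders,
# not real round-window recordings.
import numpy as np

fs, f_probe = 10000, 500            # sample rate and probe frequency (Hz)
t = np.arange(int(0.02 * fs)) / fs  # a 20-ms analysis window

def round_window_response(polarity):
    phase = 2 * np.pi * f_probe * t
    cm = polarity * np.sin(phase)                        # microphonic flips sign
    neural = np.maximum(polarity * np.sin(phase), 0)**2  # firing locks to one phase
    return cm + 0.3 * neural

# Averaging opposite polarities cancels the microphonic, while the two
# half-cycle-locked neural components interleave in time: the result
# oscillates at twice the probe frequency, as described for ANOW above.
anow = 0.5 * (round_window_response(+1) + round_window_response(-1))
```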
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Neural correlates of auditory scene analysis and perception
Cohen, Yale E.
2014-01-01
The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354
Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao
2009-01-01
Without visual information, blind people face hardships with shopping, reading, finding objects, and more. We therefore developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment-understanding techniques, SoundView processes the images from a camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through an earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
Enhanced auditory temporal gap detection in listeners with musical training.
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
2014-08-01
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians are investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies of Western-classical musicians, that auditory temporal coding is enhanced in musicians.
P50 Suppression in Children with Selective Mutism: A Preliminary Report
ERIC Educational Resources Information Center
Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair
2010-01-01
Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…
Appachi, Swathi; Specht, Jessica L; Raol, Nikhila; Lieu, Judith E C; Cohen, Michael S; Dedhia, Kavita; Anne, Samantha
2017-10-01
Objective: Options for management of unilateral hearing loss (UHL) in children include conventional hearing aids, bone-conduction hearing devices, contralateral routing of signal (CROS) aids, and frequency-modulating (FM) systems. The objective of this study was to systematically review the current literature to characterize auditory outcomes of hearing rehabilitation options in UHL.
Data Sources: PubMed, EMBASE, Medline, CINAHL, and Cochrane Library were searched from inception to January 2016. Manual searches of bibliographies were also performed.
Review Methods: Studies analyzing auditory outcomes of hearing amplification in children with UHL were included. Outcome measures included functional and objective auditory results. Two independent reviewers evaluated each abstract and article.
Results: Of the 249 articles identified, 12 met inclusion criteria. Seven articles solely focused on outcomes with bone-conduction hearing devices. Outcomes favored improved pure-tone averages, speech recognition thresholds, and sound localization in implanted patients. Five studies focused on FM systems, conventional hearing aids, or CROS hearing aids. Limited data are available but suggest a trend toward improvement in speech perception with hearing aids. FM systems were shown to have the most benefit for speech recognition in noise. Studies evaluating CROS hearing aids demonstrated variable outcomes.
Conclusions: Data evaluating functional and objective auditory measures following hearing amplification in children with UHL are limited. Most studies do suggest improvement in speech perception, speech recognition in noise, and sound localization with a hearing rehabilitation device.
Temporal binding of neural responses for focused attention in biosonar
Simmons, James A.
2014-01-01
Big brown bats emit biosonar sounds and perceive their surroundings from the delays of echoes received by the ears. Broadcasts are frequency modulated (FM) and contain two prominent harmonics sweeping from 50 to 25 kHz (FM1) and from 100 to 50 kHz (FM2). Individual frequencies in each broadcast and each echo evoke single-spike auditory responses. Echo delay is encoded by the time elapsed between volleys of responses to broadcasts and volleys of responses to echoes. If echoes have the same spectrum as broadcasts, the volley of neural responses to FM1 and FM2 is internally synchronized for each sound, which leads to sharply focused delay images. Because of amplitude–latency trading, disruption of response synchrony within the volleys occurs if the echoes are lowpass filtered, leading to blurred, defocused delay images. This effect is consistent with the temporal binding hypothesis for perceptual image formation. Bats perform inexplicably well in cluttered surroundings where echoes from off-side objects ought to cause masking. Off-side echoes are lowpass filtered because of the shape of the broadcast beam, and they evoke desynchronized auditory responses. The resulting defocused images of clutter do not mask perception of focused images for targets. Neural response synchronization may select a target to be the focus of attention, while desynchronization may impose inattention on the surroundings by defocusing perception of clutter. The formation of focused biosonar images from synchronized neural responses, and the defocusing that occurs with disruption of synchrony, quantitatively demonstrates how temporal binding may control attention and bring a perceptual object into existence. PMID:25122915
Holmes, Nicholas P; Dakwar, Azar R
2015-12-01
Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100 ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
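The straight-line velocity fit described above can be sketched on synthetic data: correction onset is read off as the zero crossing of a line fit to the rising phase of the lateral hand velocity. The sampling rate, windowing, and data are illustrative assumptions, not the published analysis.

```python
# Sketch: correction latency from a straight-line fit to correction velocity.
import numpy as np

fs = 500                                   # assumed motion-tracking rate (Hz)
t = np.arange(0, 0.6, 1 / fs)              # time from target jump (s)
# Placeholder lateral velocity: flat, then a ramping correction from ~150 ms.
velocity = np.where(t > 0.15, 2.0 * (t - 0.15), 0.0)
velocity += np.random.default_rng(3).normal(0, 0.01, t.size)   # sensor noise

peak = np.argmax(velocity)
window = slice(peak // 2, peak)            # rising phase of the correction
slope, intercept = np.polyfit(t[window], velocity[window], 1)
onset = -intercept / slope                 # zero crossing = correction latency
print(f"correction onset ~ {onset * 1000:.0f} ms")
```

The appeal of this approach, as the abstract notes, is that the latency estimate does not depend on an arbitrary statistical threshold for when the velocity first "differs" from baseline.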
Di Bonito, Maria; Studer, Michèle
2017-01-01
During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562
Neuronal plasticity and multisensory integration in filial imprinting.
Town, Stephen Michael; McCabe, Brian John
2011-03-10
Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and novel object and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.
Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss
ERIC Educational Resources Information Center
Koravand, Amineh; Jutras, Benoit
2013-01-01
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
2013-07-02
Fragmentary report excerpt; the recoverable content is a figure caption: Figure 1.3. Schematic model of the neural circuitry of Pavlovian auditory fear conditioning. The model shows how an auditory conditioned stimulus and a nociceptive unconditioned foot-shock stimulus converge in the lateral amygdala (LA) via auditory thalamus and cortex and somatosensory…
Memory for sound, with an ear toward hearing in complex auditory scenes.
Snyder, Joel S; Gregg, Melissa K
2011-10-01
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
Early and late beta-band power reflect audiovisual perception in the McGurk illusion.
Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian
2015-04-01
The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in the McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.
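The beta-band suppression measure can be sketched as follows, assuming epoched EEG stored as a trials-by-samples array. The filter order, Hilbert-envelope approach, and window indices are illustrative assumptions rather than the study's pipeline.

```python
# Sketch: post-stimulus beta-band (13-30 Hz) power change relative to a
# pre-stimulus baseline, for one channel of epoched EEG.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500   # assumed sampling rate; epochs span -0.2 to 1.0 s (600 samples)

def beta_power(epochs):
    """epochs: (n_trials, n_samples) array of EEG voltages."""
    b, a = butter(4, [13, 30], btype='bandpass', fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.abs(hilbert(filtered, axis=-1)) ** 2   # instantaneous power

def beta_suppression(epochs, base=slice(0, 100), post=slice(350, 600)):
    power = beta_power(epochs).mean(axis=0)          # average across trials
    return 10 * np.log10(power[post].mean() / power[base].mean())  # dB

# Negative values (post-stimulus power below the pre-stimulus baseline) index
# beta suppression; the study reports stronger suppression on McGurk trials.
```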
Brugeaud, Aurore; Tong, Mingjie; Luo, Li; Edge, Albert S.B.
2017-01-01
The peripheral fibers that extend from auditory neurons to hair cells are sensitive to damage, and replacement of the fibers and their afferent synapse with hair cells would be of therapeutic interest. Here, we show that RGMa, a repulsive guidance molecule previously shown to play a role in the development of the chick visual system, is expressed in the developing, newborn, and mature mouse inner ear. The effect of RGMa on synaptogenesis between afferent neurons and hair cells, from which afferent connections had been removed, was assessed. Contact of neural processes with hair cells and elaboration of postsynaptic densities at sites of the ribbon synapse were increased by treatment with a blocking antibody to RGMa, and pruning of auditory fibers to achieve the mature branching pattern of afferent neurons was accelerated. Inhibition by RGMa could thus explain why auditory neurons have a low capacity to regenerate peripheral processes: postnatal spiral ganglion neurons retain the capacity to send out processes that respond to signals for synapse formation, but expression of RGMa postnatally appears to be detrimental to regeneration of afferent hair cell innervation and antagonizes synaptogenesis. Increased synaptogenesis after inhibition of RGMa suggests that manipulation of guidance or inhibitory factors may provide a route to increase formation of new synapses at deafferented hair cells. PMID:24123853
Assessments for Efficient Evaluation of Auditory Situation Awareness Characteristics of Tactical Communications and Protective Systems (TCAPS) and Augmented Hearing Protective Devices (HPDs)
Casali, John G.; Lee, Kichol
2015-11-30
Report W81XWH-13-C-0193, Auditory Systems Lab, Industrial and Systems Engineering, Virginia Tech. Approved for public release: distribution unlimited. The Virginia Tech Auditory Systems Laboratory (ASL…
The Perception of Concurrent Sound Objects in Harmonic Complexes Impairs Gap Detection
ERIC Educational Resources Information Center
Leung, Ada W. S.; Jolicoeur, Pierre; Vachon, Francois; Alain, Claude
2011-01-01
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a…
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 diverse stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support the conclusion that consonance is an important dimension of sound: it is processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and it was found to be a principal feature of pleasing auditory stimuli.
A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion
Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon
2012-01-01
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially those involving the transverse gyri, are responsible for auditory agnosia; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
The relationship between auditory exostoses and cold water: a latitudinal analysis.
Kennedy, G E
1986-12-01
The frequency of auditory exostoses was examined by latitude. It was found that discrete bony lesions of the external auditory canal were, with very few exceptions, either absent or in very low frequency (less than 3.0%) in 0-30 degrees N and S latitudes and above 45 degrees N. The highest frequencies of auditory exostoses were found in the middle latitudes (30-45 degrees N and S) among populations who exploit either marine or fresh water resources. Clinical and experimental data are discussed, and these data are found to support strongly the hypothesis that there is a causative relationship between the formation of auditory exostoses and exploitation of resources in cold water, particularly through diving. It is therefore suggested that since auditory exostoses are behavioral rather than genetic in etiology, they should not be included in estimates of population distance based on nonmetric variables.
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
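The decoding-accuracy analysis mentioned above can be sketched with a cross-validated linear classifier. The array shapes, labels, and use of scikit-learn are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: cross-validated decoding of the attended emotion category from
# fMRI voxel patterns; all data here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.standard_normal((80, 500))   # trials x voxels (placeholder patterns)
y = rng.integers(0, 2, size=80)      # 0 = crying, 1 = laughing (attended object)

acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```

In the study's logic, higher decoding accuracy in the audiovisual than in the unimodal conditions would index the enhanced neural representation of the attended object's emotion features.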
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
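A minimal sketch of the GAM idea, assuming the third-party pygam package and synthetic spike-count data: marginal smooth terms capture single-dimension tuning, while a tensor-product term captures the kind of across-dimension interaction the study reports. The choice of dimensions and the generative model are illustrative assumptions.

```python
# Sketch: a Poisson GAM relating spike counts to two stimulus dimensions.
import numpy as np
from pygam import PoissonGAM, s, te

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 2))   # e.g., sound level and AM depth (hypothetical)
rate = np.exp(1.0 + np.sin(3 * X[:, 0]) + X[:, 0] * X[:, 1])  # built-in interaction
y = rng.poisson(rate)                  # synthetic spike counts per stimulus

# s(i): smooth marginal effect of dimension i; te(0, 1): tensor-product term
# for the cross-dimension interaction described in the abstract above.
gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()
```

A significant tensor-product term, on this formulation, is the statistical signature of integration across stimulus dimensions rather than independent tuning to each.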
ERIC Educational Resources Information Center
Sullivan, Jessica R.; Osman, Homira; Schafer, Erin C.
2015-01-01
Purpose: The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Method: Children with normal hearing between the ages…
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
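A minimal sketch of the core mapping, assuming simplified (x, t) event tuples rather than the real DAVIS 240B event stream: each brightness-change event becomes a short stereo click whose left/right balance encodes horizontal position. All parameter values are illustrative.

import numpy as np

SR, WIDTH = 44100, 240  # audio sample rate; assumed sensor width in pixels

def events_to_stereo(events, dur=1.0):
    """Render (x_pixel, t_seconds) change events as panned stereo clicks."""
    out = np.zeros((int(SR * dur), 2))
    n = 64
    click = np.hanning(n) * np.sin(2 * np.pi * 2000 * np.arange(n) / SR)
    for x, t in events:
        i = int(t * SR)
        pan = x / (WIDTH - 1)  # 0 = far left, 1 = far right
        out[i:i + n, 0] += click * (1 - pan)  # left channel gain
        out[i:i + n, 1] += click * pan        # right channel gain
    return out

audio = events_to_stereo([(30, 0.2), (120, 0.5), (210, 0.8)])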
Modulation frequency as a cue for auditory speed perception.
Senna, Irene; Parise, Cesare V; Ernst, Marc O
2017-07-12
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
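A sketch of the stimulus principle with assumed (not the authors') parameter values: a rattling-like noise carrier is amplitude modulated at a rate that grows with simulated speed, so faster motion yields a higher AM-frequency.

import numpy as np

SR = 44100

def rattle(speed, dur=1.0, am_per_speed=4.0):
    """Noise carrier whose AM-frequency is proportional to speed (a.u.)."""
    t = np.arange(int(SR * dur)) / SR
    am_freq = am_per_speed * speed  # faster motion -> higher AM rate
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_freq * t))
    carrier = np.random.default_rng(0).standard_normal(t.size)
    return envelope * carrier

slow, fast = rattle(speed=2.0), rattle(speed=8.0)  # 8 Hz vs 32 Hz AM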
Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.
Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko
2017-03-01
Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.
Developmental changes in distinguishing concurrent auditory objects.
Alain, Claude; Theunissen, Eef L; Chevalier, Hélène; Batty, Magali; Taylor, Margot J
2003-04-01
Children have considerable difficulties in identifying speech in noise. In the present study, we examined age-related differences in central auditory functions that are crucial for parsing co-occurring auditory events using behavioral and event-related brain potential measures. Seventeen pre-adolescent children and 17 adults were presented with complex sounds containing multiple harmonics, one of which could be 'mistuned' so that it was no longer an integer multiple of the fundamental. Both children and adults were more likely to report hearing the mistuned harmonic as a separate sound with an increase in mistuning. However, children were less sensitive in detecting mistuning across all levels as revealed by lower d' scores than adults. The perception of two concurrent auditory events was accompanied by a negative wave that peaked at about 160 ms after sound onset. In both age groups, the negative wave, referred to as the 'object-related negativity' (ORN), increased in amplitude with mistuning. The ORN was larger in children than in adults despite a lower d' score. Together, the behavioral and electrophysiological results suggest that concurrent sound segregation is probably adult-like in pre-adolescent children, but that children are inefficient in processing the information following the detection of mistuning. These findings also suggest that processes involved in distinguishing concurrent auditory objects continue to mature during adolescence.
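A sketch of the mistuned-harmonic stimulus described above, with illustrative values: one partial of a harmonic complex is shifted so it is no longer an integer multiple of the fundamental, which listeners increasingly report as a separate sound.

import numpy as np

SR = 44100

def harmonic_complex(f0=220.0, n_harm=10, mistuned_harm=3, mistune_pct=0.0, dur=0.5):
    """Sum of n_harm harmonics of f0; one harmonic optionally mistuned by %."""
    t = np.arange(int(SR * dur)) / SR
    tone = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        f = k * f0
        if k == mistuned_harm:
            f *= 1.0 + mistune_pct / 100.0  # push the partial off the series
        tone += np.sin(2 * np.pi * f * t)
    return tone / n_harm

tuned = harmonic_complex()                  # all partials in tune
mistuned = harmonic_complex(mistune_pct=8)  # 8% mistuning of the 3rd harmonic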
Hierarchical Processing of Auditory Objects in Humans
Kumar, Sukhbinder; Stephan, Klaas E; Warren, Jason D; Friston, Karl J; Griffiths, Timothy D
2007-01-01
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas. PMID:17542641
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Alba-Ferrara, Lucy; Fernyhough, Charles; Weis, Susanne; Mitchell, Rachel L C; Hausmann, Markus
2012-06-01
Deficits in emotional processing have been widely described in schizophrenia. Associations of positive symptoms with poor emotional prosody comprehension (EPC) have been reported at the phenomenological, behavioral, and neural levels. This review focuses on the relation between emotional processing deficits and auditory verbal hallucinations (AVH). We explore the possibility that the relation between AVH and EPC in schizophrenia might be mediated by the disruption of a common mechanism intrinsic to auditory processing, and that, moreover, prosodic feature processing deficits play a pivotal role in the formation of AVH. The review concludes with proposing a mechanism by which AVH are constituted and showing how different aspects of our neuropsychological model can explain the constellation of subjective experiences which occur in relation to AVH. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2018-06-01
The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception, and it is of interest to explore this further. The aim was to obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, and Tinnitus AND Prediction in Article Title, Abstract, and Keywords were searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016), with the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were shortlisted based on title relevance. After reading the abstracts, and with consensus between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model, based on the Bayesian brain hypothesis with attentional modulation and top-down feedback, serves as the fundamental framework in the current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as to sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle-latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as a neurophysiological substrate for auditory prediction. Tinnitus has been modeled as an auditory object that may demonstrate incomplete processing during auditory scene analysis, resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.
ERIC Educational Resources Information Center
Carlin, Michael; Toglia, Michael P.; Belmonte, Colleen; DiMeglio, Chiara
2012-01-01
In the present study the effects of visual, auditory, and audio-visual presentation formats on memory for thematically constructed lists were assessed in individuals with intellectual disability and mental age-matched children. The auditory recognition test included target items, unrelated foils, and two types of semantic lures: critical related…
ERIC Educational Resources Information Center
Nittrouer, Susan; Shune, Samantha; Lowenstein, Joanna H.
2011-01-01
Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties,…
Sound effects: Multimodal input helps infants find displaced objects.
Shinskey, Jeanne L
2017-09-01
Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution: What is already known on this subject? Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.
Glycinergic Pathways of the Central Auditory System and Adjacent Reticular Formation of the Rat.
NASA Astrophysics Data System (ADS)
Hunter, Chyren
The development of techniques to visualize and identify specific transmitters of neuronal circuits has stimulated work on the characterization of pathways in the rat central nervous system that utilize the inhibitory amino acid glycine as their neurotransmitter. Glycine is a major inhibitory transmitter in the spinal cord and brainstem of vertebrates, where it satisfies the major criteria for neurotransmitter action. Some of these characteristics are: uneven distribution in the brain, high-affinity reuptake mechanisms, inhibitory neurophysiological actions on certain neuronal populations, uneven receptor distribution, and the specific antagonism of its actions by the convulsant alkaloid strychnine. Behaviorally, antagonism of glycinergic neurotransmission in the medullary reticular formation is linked to the development of myoclonus and seizures, which may be initiated by auditory as well as other stimuli. In the present study, decreases in the concentration of glycine as well as in the density of glycine receptors in the medulla with aging were found and may be responsible for the lowered threshold for strychnine seizures observed in older rats. Neuroanatomical pathways in the central auditory system and the medullary and pontine reticular formation (RF) were investigated using retrograde transport of tritiated glycine to identify glycinergic pathways; immunohistochemical techniques were used to corroborate the location of glycine neurons. Within the central auditory system, retrograde transport studies using tritiated glycine demonstrated an ipsilateral glycinergic pathway linking nuclei of the ascending auditory system. This pathway has its cell bodies in the medial nucleus of the trapezoid body (MNTB) and projects to the ventrocaudal division of the ventral nucleus of the lateral lemniscus (VLL). Collaterals of this glycinergic projection terminate in the ipsilateral lateral superior olive (LSO). Other glycinergic pathways found were afferent to the VLL and have their origin in the ventral and lateral nuclei of the trapezoid body (MVPO and LVPO). Bilateral projections from the nucleus reticularis pontis oralis (RPOo) to the VLL were also identified as glycinergic. This projection may link motor output systems to ascending auditory input, generating the auditory behavioral responses seen with glycine antagonism in animal models of myoclonus and seizure.
Guinchard, A-C; Ghazaleh, Naghmeh; Saenz, M; Fornari, E; Prior, J O; Maeder, P; Adib, S; Maire, R
2016-11-01
We studied possible brain changes with functional MRI (fMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) in a patient with a rare, high-intensity "objective tinnitus" (high-level SOAEs) of 10 years' duration in the left ear, with no associated hearing loss. This is the first case of objective cochlear tinnitus to be investigated with functional neuroimaging. The objective cochlear tinnitus was measured by Spontaneous Otoacoustic Emissions (SOAE) equipment (frequency 9689 Hz, intensity 57 dB SPL) and is clearly audible to anyone standing near the patient. Functional modifications in primary auditory areas and other brain regions were evaluated using 3T and 7T fMRI and FDG-PET. In the fMRI evaluations, a saturation of the auditory cortex at the tinnitus frequency was observed, but the global cortical tonotopic organization remained intact when compared with results from fMRI of healthy subjects. The FDG-PET showed no evidence of an increase or decrease of activity in the auditory cortices or in the limbic system as compared to normal subjects. In this patient with high-intensity objective cochlear tinnitus, fMRI and FDG-PET showed no significant brain reorganization in auditory areas and/or in the limbic system, as reported in the literature in patients with chronic subjective tinnitus. Copyright © 2016 Elsevier B.V. All rights reserved.
The perception of coherent and non-coherent auditory objects: a signature in gamma frequency band.
Knief, A; Schulte, M; Bertran, O; Pantev, C
2000-07-01
Whether gamma band activity in magnetoencephalographic and electroencephalographic recordings reflects the performance of a gestalt recognition process remains an open question. We investigated the functional relevance of gamma band activity for the perception of auditory objects. An auditory experiment was performed as an analog to the Kanizsa experiment in the visual modality, comprising four different coherent and non-coherent stimuli. For the first time, functional differences in evoked gamma band activity due to the perception of these stimuli were demonstrated by various methods (localization of sources, wavelet analysis, and independent component analysis, ICA). Responses to coherent stimuli were found to have more features in common than responses to non-coherent stimuli (e.g., more closely located sources and a smaller number of ICA components). The results point to the existence of a pitch processor in the auditory pathway.
The effect of background music on the taste of wine.
North, Adrian C
2012-08-01
Research concerning cross-modal influences on perception has neglected auditory influences on perceptions of non-auditory objects, although a small number of studies indicate that auditory stimuli can influence perceptions of the freshness of foodstuffs. Consistent with this, the results reported here indicate that independent groups' ratings of the taste of the wine reflected the emotional connotations of the background music played while they drank it. These results indicate that the symbolic function of auditory stimuli (in this case music) may influence perception in other modalities (in this case gustation); and are discussed in terms of possible future research that might investigate those aspects of music that induce such effects in a particular manner, and how such effects might be influenced by participants' pre-existing knowledge and expertise with regard to the target object in question. ©2011 The British Psychological Society.
Auditory pathways: anatomy and physiology.
Pickles, James O
2015-01-01
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Slevc, L Robert; Shell, Alison R
2015-01-01
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Abdollahi Fakhim, Shahin; Naderpoor, Masoud; Mousaviagdas, Mehrnoosh
2014-01-01
Introduction: First branchial cleft anomalies manifest with duplication of the external auditory canal. Case Report: This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. Conclusion: It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear. PMID:25320705
Temporal binding of neural responses for focused attention in biosonar.
Simmons, James A
2014-08-15
Big brown bats emit biosonar sounds and perceive their surroundings from the delays of echoes received by the ears. Broadcasts are frequency modulated (FM) and contain two prominent harmonics sweeping from 50 to 25 kHz (FM1) and from 100 to 50 kHz (FM2). Individual frequencies in each broadcast and each echo evoke single-spike auditory responses. Echo delay is encoded by the time elapsed between volleys of responses to broadcasts and volleys of responses to echoes. If echoes have the same spectrum as broadcasts, the volley of neural responses to FM1 and FM2 is internally synchronized for each sound, which leads to sharply focused delay images. Because of amplitude-latency trading, disruption of response synchrony within the volleys occurs if the echoes are lowpass filtered, leading to blurred, defocused delay images. This effect is consistent with the temporal binding hypothesis for perceptual image formation. Bats perform inexplicably well in cluttered surroundings where echoes from off-side objects ought to cause masking. Off-side echoes are lowpass filtered because of the shape of the broadcast beam, and they evoke desynchronized auditory responses. The resulting defocused images of clutter do not mask perception of focused images for targets. Neural response synchronization may select a target to be the focus of attention, while desynchronization may impose inattention on the surroundings by defocusing perception of clutter. The formation of focused biosonar images from synchronized neural responses, and the defocusing that occurs with disruption of synchrony, quantitatively demonstrates how temporal binding may control attention and bring a perceptual object into existence. © 2014. Published by The Company of Biologists Ltd.
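As a signal-level analogue of the delay code described above (not a model of the bat's neural mechanism), the sketch below estimates echo delay by cross-correlating an FM sweep with a delayed, attenuated copy; sweep shape and delay are illustrative.

import numpy as np

SR = 250000                                  # ultrasonic sampling rate (Hz)
dur = 0.003                                  # 3 ms broadcast
t = np.arange(int(SR * dur)) / SR
# Linear FM sweep from 50 kHz down to 25 kHz (an FM1-like harmonic).
sweep = np.sin(2 * np.pi * (50000 * t - (25000 / dur) * t**2 / 2))

delay = 0.002                                # 2 ms echo delay (~34 cm range)
received = np.zeros(sweep.size + int(SR * delay))
received[int(SR * delay):] += 0.3 * sweep    # attenuated echo

# The peak of the cross-correlation gives the broadcast-to-echo delay.
xcorr = np.correlate(received, sweep, mode="valid")
print(f"estimated delay: {np.argmax(xcorr) / SR * 1e3:.2f} ms")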
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.
1991-01-01
The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
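A toy sketch of the parameter-mapping idea, not a reproduction of ACE: a continuously varying data value is linked to pitch, while a discrete event is linked to a fixed, recognizable pulse train. All mappings and values are assumptions for illustration.

import numpy as np

SR = 44100

def icon(value, dur=0.3, f_lo=300.0, f_hi=1200.0):
    """Continuous parameter: map a value in [0, 1] to a pitch."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * (f_lo + value * (f_hi - f_lo)) * t)

def alarm(n_pulses=3):
    """Discrete event: a short, repeating high-pitched burst."""
    pulse, gap = icon(1.0, dur=0.05), np.zeros(int(0.05 * SR))
    return np.concatenate([np.concatenate([pulse, gap]) for _ in range(n_pulses)])

cue = np.concatenate([icon(v) for v in (0.1, 0.5, 0.9)])  # rising data stream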
Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile
2018-01-01
OBJECTIVE: The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. METHODS: Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. RESULTS: The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. CONCLUSIONS: Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users. PMID:29466495
Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies
2015-12-01
Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
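One standard way to quantify integration from cumulative distribution functions is the race-model inequality, P_AV(t) <= P_A(t) + P_V(t); violations indicate integration beyond statistical facilitation. The sketch below applies it to simulated response times standing in for the study's data (the authors' exact analysis may differ).

import numpy as np

rng = np.random.default_rng(2)
rt_a = rng.normal(420, 60, 200)   # auditory-only response times (ms)
rt_v = rng.normal(450, 60, 200)   # visual-only response times (ms)
rt_av = rng.normal(380, 55, 200)  # audiovisual response times (ms)

ts = np.linspace(250, 650, 81)
def cdf(rts):
    return np.array([(rts <= t).mean() for t in ts])

bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
violation = cdf(rt_av) - bound
print(f"max race-model violation: {violation.max():.3f}")  # > 0 suggests integration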
Tinnitus Intensity Dependent Gamma Oscillations of the Contralateral Auditory Cortex
van der Loo, Elsa; Gais, Steffen; Congedo, Marco; Vanneste, Sven; Plazier, Mark; Menovsky, Tomas; Van de Heyning, Paul; De Ridder, Dirk
2009-01-01
Background Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. Methods and Findings In unilateral tinnitus patients (N = 15; 10 right, 5 left) source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). Conclusion Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception. PMID:19816597
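A minimal sketch of this kind of analysis, with simulated source signals and ratings standing in for the patients' data: per-patient gamma-band power is estimated from a power spectrum and correlated with Visual Analogue Scale loudness.

import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
fs = 250                             # sampling rate (Hz)
vas = rng.uniform(2, 9, 15)          # simulated VAS loudness, 15 patients

gamma_power = []
for score in vas:
    n = fs * 60                      # one minute of resting-state signal
    sig = rng.standard_normal(n) + 0.05 * score * np.sin(
        2 * np.pi * 40 * np.arange(n) / fs)  # 40 Hz power scales with loudness
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    gamma_power.append(pxx[(f >= 30) & (f <= 45)].mean())

r, p = pearsonr(gamma_power, vas)
print(f"gamma power vs. loudness: r = {r:.2f}, p = {p:.4f}")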
Shang, Andrea; Bylipudi, Sooraz; Bieszczad, Kasia M
2018-05-31
Epigenetic mechanisms are key for regulating long-term memory (LTM) and are known to exert control on memory formation in multiple systems of the adult brain, including the sensory cortex. One epigenetic mechanism is chromatin modification by histone acetylation. Blocking the action of histone de-acetylases (HDACs) that normally negatively regulate LTM by repressing transcription has been shown to enable memory formation. Indeed, HDAC inhibition appears to facilitate memory by altering the dynamics of gene expression events important for memory consolidation. However, less understood are the ways in which molecular-level consolidation processes alter subsequent memory to enhance storage or facilitate retrieval. Here we used a sensory perspective to investigate whether the characteristics of memory formed with HDAC inhibitors are different from naturally formed memory. One possibility is that HDAC inhibition enables memory to form with greater sensory detail than normal. Because the auditory system undergoes learning-induced remodeling that provides substrates for sound-specific LTM, we aimed to identify behavioral effects of HDAC inhibition on memory for specific sound features using a standard model of auditory associative cue-reward learning, memory, and cortical plasticity. We found that three systemic post-training treatments of an HDAC3 inhibitor (RGFP966, Abcam Inc.) given to rats in the early phase of training facilitated auditory discriminative learning, changed auditory cortical tuning, and increased the specificity for acoustic frequency formed in memory of both excitatory (S+) and inhibitory (S-) associations for at least 2 weeks. The findings support the view that epigenetic mechanisms act on neural and behavioral sensory acuity to increase the precision of associative cue memory, which can be revealed by studying the sensory characteristics of long-term associative memory formation with HDAC inhibitors. Published by Elsevier B.V.
Options for Auditory Training for Adults with Hearing Loss.
Olson, Anne D
2015-11-01
Hearing aid devices alone do not adequately compensate for sensory losses despite significant advances in digital technology. Use rates of amplification among adults with hearing loss remain low, and satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature on auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to obtain optimal performance from devices. Several auditory training programs that are readily accessible to adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs, with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs.
Selective synaptic remodeling of amygdalocortical connections associated with fear memory.
Yang, Yang; Liu, Dan-Qian; Huang, Wei; Deng, Juan; Sun, Yangang; Zuo, Yi; Poo, Mu-Ming
2016-10-01
Neural circuits underlying auditory fear conditioning have been extensively studied. Here we identified a previously unexplored pathway from the lateral amygdala (LA) to the auditory cortex (ACx) and found that selective silencing of this pathway using chemo- and optogenetic approaches impaired fear memory retrieval. Dual-color in vivo two-photon imaging of mouse ACx showed pathway-specific increases in the formation of LA axon boutons, dendritic spines of ACx layer 5 pyramidal cells, and putative LA-ACx synaptic pairs after auditory fear conditioning. Furthermore, joint imaging of pre- and postsynaptic structures showed that essentially all new synaptic contacts were made by adding new partners to existing synaptic elements. Together, these findings identify an amygdalocortical projection that is important to fear memory expression and is selectively modified by associative fear learning, and unravel a distinct architectural rule for synapse formation in the adult brain.
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about 2/3 after three and a half years of training, suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
Sleifer, Pricila; Didoné, Dayane Domeneghini; Keppeler, Ísis Bicca; Bueno, Claudine Devicari; Riesgo, Rudimar dos Santos
2017-01-01
Introduction Tone-evoked auditory brainstem responses (tone-ABR) enable differential diagnosis in the evaluation of children up to 12 months of age, including those with external and/or middle ear malformations. The use of auditory stimuli with frequency specificity by air and bone conduction allows characterization of the hearing profile. Objective The objective of our study was to compare the results obtained with tone-ABR by air and bone conduction in children up to 12 months of age with agenesis of the external auditory canal. Method The study was cross-sectional, observational, individual, and contemporary. We conducted the research with tone-ABR by air and bone conduction at the frequencies of 500 Hz and 2000 Hz in 32 children, 23 boys, from one to 12 months old, with agenesis of the external auditory canal. Results The tone-ABR thresholds were significantly elevated for air conduction at the frequencies of 500 Hz and 2000 Hz, while the thresholds for bone conduction had normal values in both ears. We found no statistically significant difference between genders or ears for most of the comparisons. Conclusion Conductive hearing loss did not alter the thresholds obtained by bone conduction, but it elevated all thresholds obtained by air conduction. The tone-ABR by bone conduction is an important tool for assessing cochlear integrity in children under 12 months with agenesis of the external auditory canal. PMID:29018492
Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István
2005-09-01
The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of acoustic deviance detection, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds, differing from each other in timbre and pitch, were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.
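A sketch of the standard MMN computation implied above, on simulated epochs: average the responses to deviants (the reversed pitch-timbre combinations) and to standards, then take the deviant-minus-standard difference wave and read off its peak. Amplitudes and latencies are illustrative.

import numpy as np

rng = np.random.default_rng(4)
fs = 500                                        # EEG sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)                # epoch from -100 to 400 ms
mmn = -2e-6 * np.exp(-((t - 0.16) ** 2) / (2 * 0.03 ** 2))  # dip near 160 ms

standards = rng.standard_normal((400, t.size)) * 5e-6
deviants = rng.standard_normal((80, t.size)) * 5e-6 + mmn

diff = deviants.mean(axis=0) - standards.mean(axis=0)  # difference wave
win = (t >= 0.1) & (t <= 0.25)                         # typical MMN window
peak = t[win][np.argmin(diff[win])]
print(f"MMN peak: {diff[win].min() * 1e6:.2f} uV at {peak * 1e3:.0f} ms")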
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Triesman and Gelade's [Triesman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Using Facebook to Reach People Who Experience Auditory Hallucinations
Brian, Rachel Marie; Ben-Zeev, Dror
2016-01-01
Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. Women, people who use mobile phones, and those experiencing more distress were reportedly more open to using Facebook as a support and/or therapeutic tool in the future. Conclusions Facebook advertisements can be used to recruit research participants who experience auditory hallucinations quickly and in a cost-effective manner. Most (58%) Web-based respondents are open to Facebook-based support and treatment and are willing to describe their subjective experiences with auditory hallucinations. PMID:27302017
Sound localization by echolocating bats
NASA Astrophysics Data System (ADS)
Aytekin, Murat
Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
Multivariate sensitivity to voice during auditory categorization.
Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard
2015-09-01
Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
Tinnitus What and Where: An Ecological Framework
Searchfield, Grant D.
2014-01-01
Tinnitus is an interaction of the environment, cognition, and plasticity. The connection between the individual with tinnitus and their world seldom receives attention in neurophysiological research. As well as changes in cell excitability, an individual’s culture and beliefs, and work and social environs may all influence how tinnitus is perceived. In this review, an ecological framework for current neurophysiological evidence is considered. The model defines tinnitus as the perception of an auditory object in the absence of an acoustic event. It is hypothesized that, following deafferentation, adaptive feature extraction, schema, and semantic object formation processes lead to tinnitus in a manner predicted by Adaptation Level Theory (1, 2). Evidence from physiological studies is compared to the tenets of the proposed ecological model. The consideration of diverse events within an ecological context may unite seemingly disparate neurophysiological models. PMID:25566177
Brown, David; Macpherson, Tom; Ward, Jamie
2011-01-01
Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion) and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions on blindfolded sighted participants. The main task that we used involved identifying and locating objects placed on a table by holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with a hand-held device, but locating the objects was easier with a head-mounted device. Brightness converted into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes in two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
"Change deafness" arising from inter-feature masking within a single auditory object.
Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria
2014-03-01
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that either is expected based on the on-going regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness," in that although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.
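To make the stimulus design concrete, the sketch below generates the described sequence structure: repeating 60-msec ABC pips followed by a long tone that either continues the pattern (LONG-expected) or violates it (LONG-unexpected). The frequencies and the long-tone duration are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of the ABCABC... tone-pip sequence described above.
# Frequencies (Hz) and the long-tone duration (ms) are hypothetical.
def sequence(n_repeats: int, expected: bool, freqs=(440, 554, 659)):
    pips = [(f, 60) for _ in range(n_repeats) for f in freqs]  # (Hz, ms) pips
    a, b, _ = freqs
    long_freq = a if expected else b       # the pattern predicts A next
    return pips + [(long_freq, 600)]       # final long tone

print(sequence(2, expected=False)[-1])     # a pattern-violating long tone
```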
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking is implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
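For readers unfamiliar with the ERB representation mentioned above, the sketch below uses the widely cited Glasberg and Moore (1990) approximation of auditory filter bandwidth; the abstract does not specify the exact ERB formulation used, so this is an illustrative stand-in.

```python
# Equivalent rectangular bandwidth (ERB) of the auditory filter centred at
# f_hz, per the common Glasberg & Moore (1990) approximation (an assumed
# stand-in for the paper's exact filterbank parameters).
def erb_hz(f_hz: float) -> float:
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (250, 1000, 4000):
    print(f"{f} Hz -> ERB ~ {erb_hz(f):.1f} Hz")
```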
Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru
2016-01-01
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
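The variance figures above translate directly into correlation magnitudes, since variance explained equals the squared correlation:

```python
# Variance explained is r squared, so 16% and 25% correspond to
# correlations of about 0.40 and exactly 0.50.
for share in (0.16, 0.25):
    print(f"{share:.0%} of variance -> r = {share ** 0.5:.2f}")
```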
Macrophage-Mediated Glial Cell Elimination in the Postnatal Mouse Cochlea
Brown, LaShardai N.; Xing, Yazhi; Noble, Kenyaria V.; Barth, Jeremy L.; Panganiban, Clarisse H.; Smythe, Nancy M.; Bridges, Mary C.; Zhu, Juhong; Lang, Hainan
2017-01-01
Hearing relies on the transmission of auditory information from sensory hair cells (HCs) to the brain through the auditory nerve. This relay of information requires HCs to be innervated by spiral ganglion neurons (SGNs) in an exclusive manner and SGNs to be ensheathed by myelinating and non-myelinating glial cells. In the developing auditory nerve, mistargeted SGN axons are retracted or pruned and excessive cells are cleared in a process referred to as nerve refinement. Whether auditory glial cells are eliminated during auditory nerve refinement is unknown. Using early postnatal mice of either sex, we show that glial cell numbers decrease after the first postnatal week, corresponding temporally with nerve refinement in the developing auditory nerve. Additionally, expression of immune-related genes was upregulated and macrophage numbers increase in a manner coinciding with the reduction of glial cell numbers. Transient depletion of macrophages during early auditory nerve development, using transgenic CD11bDTR/EGFP mice, resulted in the appearance of excessive glial cells. Macrophage depletion caused abnormalities in myelin formation and transient edema of the stria vascularis. Macrophage-depleted mice also showed auditory function impairment that partially recovered in adulthood. These findings demonstrate that macrophages contribute to the regulation of glial cell number during postnatal development of the cochlea and that glial cells play a critical role in hearing onset and auditory nerve maturation. PMID:29375297
Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2.
Mishra, Rajkishor; Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction "Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action" (American Diabetes Association). Previous literature has reported connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution ability through GDT (Gap Detection Threshold) in individuals with diabetes mellitus type 2 with high frequency hearing loss. Methods Fifteen subjects with diabetes mellitus type 2 with high frequency hearing loss in the age range of 30 to 40 years participated in the study as the experimental group. Fifteen age-matched non-diabetic individuals with normal hearing served as the control group. We administered the Gap Detection Threshold (GDT) test to all participants to assess their temporal resolution ability. Result We used the independent t -test to compare between groups. Results showed that the diabetic group (experimental) performed significantly poorer compared with the non-diabetic group (control). Conclusion It is possible to conclude that widening of auditory filters and changes in the central auditory nervous system contributed to poorer performance for temporal resolution task (Gap Detection Threshold) in individuals with diabetes mellitus type 2. Findings of the present study revealed the deteriorating effect of diabetes mellitus type 2 at the central auditory processing level.
Cholecystokinin from the entorhinal cortex enables neural plasticity in the auditory cortex
Li, Xiao; Yu, Kai; Zhang, Zicong; Sun, Wenjian; Yang, Zhou; Feng, Jingyu; Chen, Xi; Liu, Chun-Hua; Wang, Haitao; Guo, Yi Ping; He, Jufang
2014-01-01
Patients with damage to the medial temporal lobe show deficits in forming new declarative memories but can still recall older memories, suggesting that the medial temporal lobe is necessary for encoding memories in the neocortex. Here, we found that cortical projection neurons in the perirhinal and entorhinal cortices were mostly immunopositive for cholecystokinin (CCK). Local infusion of CCK in the auditory cortex of anesthetized rats induced plastic changes that enabled cortical neurons to potentiate their responses or to start responding to an auditory stimulus that was paired with a tone that robustly triggered action potentials. CCK infusion also enabled auditory neurons to start responding to a light stimulus that was paired with a noise burst. In vivo intracellular recordings in the auditory cortex showed that synaptic strength was potentiated after two pairings of presynaptic and postsynaptic activity in the presence of CCK. Infusion of a CCKB antagonist in the auditory cortex prevented the formation of a visuo-auditory association in awake rats. Finally, activation of the entorhinal cortex potentiated neuronal responses in the auditory cortex, which was suppressed by infusion of a CCKB antagonist. Together, these findings suggest that the medial temporal lobe influences neocortical plasticity via CCK-positive cortical projection neurons in the entorhinal cortex. PMID:24343575
System and algorithm for evaluation of human auditory analyzer state
NASA Astrophysics Data System (ADS)
Bachynskiy, Mykhaylo V.; Azarkhov, Oleksandr Yu.; Shtofel, Dmytro Kh.; Horbatiuk, Svitlana M.; Ławicki, Tomasz; Kalizhanova, Aliya; Smailova, Saule; Askarova, Nursanat
2017-08-01
The paper discusses the evaluation of the human auditory state with technical means. It considers the disadvantages of existing clinical audiometry methods and systems. It proposes a method for evaluating the state of the auditory analyzer by means of pulsometry, to make the medical assessment more objective and efficient. The method uses two optoelectronic sensors located on the carotid artery and the ear lobe. On this basis, a biotechnical system for evaluation and stimulation of the human auditory analyzer state was developed, and its hardware and software were substantiated. Different modes of stimulation in the designed system were tested, and the influence of the procedure on a patient was studied.
Auditory Task Irrelevance: A Basis for Inattentional Deafness
Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.
2018-01-01
Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms in high-workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor, given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index of our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was selectively enhanced by auditory task relevance, independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
A P300 event related potential technique for assessment of sexually oriented interest.
Vardi, Yoram; Volos, Michal; Sprecher, Elliot; Granovsky, Yelena; Gruenwald, Ilan; Yarnitsky, David
2006-12-01
Despite all of the modern, sophisticated tests that exist for diagnosing and assessing male and female sexual disorders, to our knowledge there is no objective psychophysiological test to evaluate sexual arousal and interest. We provide preliminary data showing a decrease in auditory P300 wave amplitude during exposure to sexually explicit video clips and a significant correlation between the auditory P300 amplitude decrease and self-reported scores of sexual arousal and interest in the clips. A total of 30 healthy subjects were exposed to several blocks of auditory stimuli administered using an oddball paradigm. Baseline auditory P300 amplitudes were obtained, and auditory stimuli were then delivered while subjects viewed video clips with 3 types of content: sport, scenery, and sex. Auditory P300 amplitude significantly decreased during viewing of clips of all three content types. Viewing sexual content clips caused a maximal decrease in P300 amplitude (p <0.0001). In addition, a high correlation was found between the amplitude decrease and scores on the sexual arousal questionnaire regarding the viewed clips (r = 0.61, p <0.001). The P300 amplitude decrease was also significantly related to the sexual interest score (r = 0.37, p = 0.042) but not to interest in clips of nonsexual content. The change in auditory P300 amplitude during exposure to visual stimuli with sexual content seems to be an objective measure of subject sexual interest. This method might be applied to assess therapeutic intervention and as a diagnostic tool for assessing disorders of impaired libido or psychogenic sexual dysfunction.
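The reported relationships are ordinary Pearson correlations; the sketch below shows the computation on invented values (not the study's data).

```python
# Pearson correlation between a per-subject P300 amplitude decrease and a
# self-report score; both arrays are hypothetical.
import numpy as np

p300_decrease = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])  # microvolts
arousal_score = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0])  # questionnaire ratings

print(f"r = {np.corrcoef(p300_decrease, arousal_score)[0, 1]:.2f}")
```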
Baltus, Alina; Herrmann, Christoph Siegfried
2016-06-01
Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments suggesting that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain-computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
A dual-process account of auditory change detection.
McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B
2010-08-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
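For concreteness, here is a minimal sketch of one common dual-process parameterization: an all-or-none high-threshold component with probability R plus an equal-variance SDT component with sensitivity d'. The authors' exact model specification is not given in the abstract, so this parameterization is an assumption.

```python
# Dual-process ROC sketch: hit rate = R + (1 - R) * SDT hit rate, where R
# is an all-or-none (high-threshold) detection probability and d_prime is
# the sensitivity of an equal-variance SDT component. Illustrative only.
from statistics import NormalDist

def dual_process_hit_rate(fa_rate: float, R: float, d_prime: float) -> float:
    z = NormalDist()
    c = z.inv_cdf(1.0 - fa_rate)               # criterion implied by FA rate
    return R + (1.0 - R) * (1.0 - z.cdf(c - d_prime))

for fa in (0.05, 0.10, 0.20, 0.40):            # trace out one ROC curve
    print(fa, round(dual_process_hit_rate(fa, R=0.3, d_prime=1.0), 3))
```

A signature of such models is the upper-left intercept of the ROC (at a false-alarm rate of zero the hit rate is already R), which a pure equal-variance SDT model cannot produce.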
Strauss, Daniel J; Delb, Wolfgang; D'Amelio, Roberto; Low, Yin Fen; Falkai, Peter
2008-02-01
Large-scale neural correlates of the tinnitus decompensation might be used for an objective evaluation of therapies and neurofeedback-based therapeutic approaches. In this study, we try to identify large-scale neural correlates of the tinnitus decompensation using wavelet phase stability criteria of single sweep sequences of late auditory evoked potentials as a synchronization stability measure. The extracted measure provided an objective quantification of the tinnitus decompensation and allowed for a reliable discrimination between a group of compensated and decompensated tinnitus patients. We provide an interpretation for our results by a neural model of top-down projections based on the Jastreboff tinnitus model combined with the adaptive resonance theory, which has not been applied to model tinnitus so far. Using this model, our stability measure of evoked potentials can be linked to the focus of attention on the tinnitus signal. It is concluded that the wavelet phase stability of late auditory evoked potential single sweeps might be used as an objective measure of tinnitus decompensation and can be interpreted in the framework of the Jastreboff tinnitus model and adaptive resonance theory.
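A generic version of the kind of single-sweep phase measure described above can be sketched as a wavelet-based phase-locking index; this is a standard construction, not the authors' exact wavelet phase stability criterion.

```python
# Inter-sweep phase stability: convolve each sweep with a complex Morlet
# wavelet at f0 and take the resultant length of the per-sweep phases
# (1 = phases identical across sweeps). Generic phase-locking index only.
import numpy as np

def phase_stability(sweeps, fs, f0, n_cycles=5.0):
    t = np.arange(-0.25, 0.25, 1.0 / fs)               # wavelet support (s)
    sigma = n_cycles / (2 * np.pi * f0)                # Gaussian width
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))
    phases = np.array([np.angle(np.convolve(s, wavelet, mode="same"))
                       for s in sweeps])
    return np.abs(np.mean(np.exp(1j * phases), axis=0))  # per-sample index

# Toy demo: 50 noisy sweeps containing a phase-locked 8-Hz component.
fs, f0 = 500.0, 8.0
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1.0 / fs)
sweeps = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal((50, t.size))
print(phase_stability(sweeps, fs, f0).max())           # close to 1
```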
Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred
2016-10-27
Bats orientate in darkness by listening to echoes from their biosonar calls, a behaviour known as echolocation. Recent studies showed that cortical neurons respond in a highly selective manner when stimulated with natural echolocation sequences that contain echoes from single targets. However, it remains unknown how cortical neurons process echolocation sequences containing echo information from multiple objects. In the present study, we used echolocation sequences containing echoes from three, two or one object separated in depth as stimuli to study neuronal activity in the bat auditory cortex. Neuronal activity was recorded with multi-electrode arrays placed in the dorsal auditory cortex, where neurons tuned to target distance are found. Our results show that target-distance encoding neurons are mostly selective to echoes coming from the closest object, and that the representation of echo information from distant objects is selectively suppressed. This suppression extends over a large part of the dorsal auditory cortex and may override possible parallel processing of multiple objects. The presented data suggest that global cortical suppression might establish a cortical "default mode" that allows selective focusing on the closest obstacle even without active attention from the animals.
Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J
2002-01-01
Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that once grouped, the separate call components are weighted differently in recognizing and locating the call, the so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel
Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J
2015-03-18
The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty, but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across-subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error-prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. Copyright © 2015 Seghier et al.
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
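The sound-level cue reviewed above follows the free-field inverse-square law, a drop of about 6 dB per doubling of source distance; the toy calculation below inverts it, ignoring reverberation, which the review treats as a separate cue.

```python
# Free-field level-distance relation: L(d) = L_ref - 20*log10(d / d_ref).
# A toy model of the level cue only; real rooms add reverberation.
import math

def level_at_distance(level_ref_db, d_ref_m, d_m):
    return level_ref_db - 20.0 * math.log10(d_m / d_ref_m)

def distance_from_level(level_ref_db, d_ref_m, level_db):
    return d_ref_m * 10.0 ** ((level_ref_db - level_db) / 20.0)

print(level_at_distance(70.0, 1.0, 2.0))     # ~64 dB at 2 m (6-dB drop)
print(distance_from_level(70.0, 1.0, 58.0))  # ~4 m
```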
Cultivating Empathy for the Mentally Ill Using Simulated Auditory Hallucinations
ERIC Educational Resources Information Center
Bunn, William; Terpstra, Jan
2009-01-01
Objective: The authors address the issue of cultivating medical students' empathy for the mentally ill by examining medical student empathy pre- and postsimulated auditory hallucination experience. Methods: At the University of Utah, 150 medical students participated in this study during their 6-week psychiatry rotation. The Jefferson Scale of…
Effects of Auditory Distraction on Cognitive Processing of Young Adults
ERIC Educational Resources Information Center
LaPointe, Leonard L.; Heald, Gary R.; Stierwalt, Julie A. G.; Kemker, Brett E.; Maurice, Trisha
2007-01-01
Objective: The effects of interference, competition, and distraction on cognitive processing are unclearly understood, particularly regarding type and intensity of auditory distraction across a variety of cognitive processing tasks. Method: The purpose of this investigation was to report two experiments that sought to explore the effects of types…
Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis
ERIC Educational Resources Information Center
Kratzig, Gregory P.; Arbuthnott, Katherine D.
2006-01-01
Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…
Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.
2016-01-01
Background and Objective The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498
Reduced auditory efferent activity in childhood selective mutism.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
2004-06-01
Selective mutism is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with selective mutism may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Auditory brainstem response to complex sounds: a tutorial
Skoe, Erika; Kraus, Nina
2010-01-01
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
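The core analysis step behind any brainstem response recording, cABR included, is time-locked averaging across sweeps; a minimal sketch follows, with every parameter (sampling rate, sweep count, the toy response) assumed for illustration.

```python
# Time-locked averaging: the tiny evoked response emerges from the noise
# because uncorrelated noise shrinks as 1/sqrt(N) across N sweeps.
import numpy as np

rng = np.random.default_rng(1)
fs = 10000.0                                   # assumed sampling rate (Hz)
t = np.arange(0.0, 0.05, 1.0 / fs)             # 50-ms epoch
response = 0.5 * np.sin(2 * np.pi * 300 * t)   # toy evoked response
sweeps = response + 5.0 * rng.standard_normal((2000, t.size))

average = sweeps.mean(axis=0)                  # grand average across sweeps
print(np.corrcoef(average, response)[0, 1])    # high: response recovered
```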
Zupan, Barbra; Sussman, Joan E
2009-01-01
Experiment 1 examined modality preferences for combined auditory-visual stimuli in children and adults with normal hearing. Experiment 2 compared the modality preferences of children using cochlear implants who were participating in an auditory-emphasized therapy approach with those of the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults, who showed a strong visual preference to unfamiliar stimuli only. The similarity of auditory responses in children with hearing loss to those of children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) describe the pattern of modality preferences reported in young children without hearing loss; (2) recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) understand the role of familiarity in modality preferences in children with and without hearing loss.
A vestibular phenotype for Waardenburg syndrome?
NASA Technical Reports Server (NTRS)
Black, F. O.; Pesznecker, S. C.; Allen, K.; Gianna, C.
2001-01-01
OBJECTIVE: To investigate vestibular abnormalities in subjects with Waardenburg syndrome. STUDY DESIGN: Retrospective record review. SETTING: Tertiary referral neurotology clinic. SUBJECTS: Twenty-two adult white subjects with clinical diagnosis of Waardenburg syndrome (10 type I and 12 type II). INTERVENTIONS: Evaluation for Waardenburg phenotype, history of vestibular and auditory symptoms, tests of vestibular and auditory function. MAIN OUTCOME MEASURES: Results of phenotyping, results of vestibular and auditory symptom review (history), results of vestibular and auditory function testing. RESULTS: Seventeen subjects were women, and 5 were men. Their ages ranged from 21 to 58 years (mean, 38 years). Sixteen of the 22 subjects sought treatment for vertigo, dizziness, or imbalance. For subjects with vestibular symptoms, the results of vestibuloocular tests (calorics, vestibular autorotation, and/or pseudorandom rotation) were abnormal in 77%, and the results of vestibulospinal function tests (computerized dynamic posturography, EquiTest) were abnormal in 57%, but there were no specific patterns of abnormality. Six had objective sensorineural hearing loss. Thirteen had an elevated summating/action potential (>0.40) on electrocochleography. All subjects except those with severe hearing loss (n = 3) had normal auditory brainstem response results. CONCLUSION: Patients with Waardenburg syndrome may experience primarily vestibular symptoms without hearing loss. Electrocochleography and vestibular function tests appear to be the most sensitive measures of otologic abnormalities in such patients.
Recent advances in exploring the neural underpinnings of auditory scene perception
Snyder, Joel S.; Elhilali, Mounya
2017-01-01
Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022
Multisensory connections of monkey auditory cerebral cortex
Smiley, John F.; Falchier, Arnaud
2009-01-01
Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628
Speech Evoked Auditory Brainstem Response in Stuttering
Tahaei, Ali Akbar; Ashayeri, Hassan; Pourbakht, Akram; Kamali, Mohammad
2014-01-01
Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency. PMID:25215262
Cohen, Dale J.; Warren, Erin; Blanc-Goldhammer, Daryn
2013-01-01
The sound |faiv| is visually depicted as a written number word “five” and as an Arabic digit “5.” Here, we present four experiments – two quantity same/different experiments and two magnitude comparison experiments – that assess whether auditory number words (|faiv|), written number words (“five”), and Arabic digits (“5”) directly activate one another and/or their associated quantity. The quantity same/different experiments reveal that the auditory number words, written number words, and Arabic digits directly activate one another without activating their associated quantity. That is, there are cross-format physical similarity effects but no numerical distance effects. The cross-format magnitude comparison experiments reveal significant effects of both physical similarity and numerical distance. We discuss these results in relation to the architecture of numerical cognition. PMID:23624377
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction Mismatch Negativity is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method Seventeen normal-hearing individuals participated in the study; all gave informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded mismatch negativity (MMN) with a pair of pure-tone stimuli, 1000 Hz and 1010 Hz, with 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, we used 1000 Hz and 1100 Hz, with 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus, to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result Results revealed that MMN was present in only 64% of the individuals in both conditions. Further, Multivariate Analysis of Variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz vs. 1010 Hz) and gross (1000 Hz vs. 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
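The frequent/infrequent stimulation described above is a classic oddball sequence; a minimal generator is sketched below, with the deviant probability assumed, since the abstract does not report it.

```python
# Oddball sequence for the "fine" condition: 1000-Hz standards with rare
# 1010-Hz deviants (swap in 1100 for the "gross" condition). The 15%
# deviant probability is an assumption.
import random

def oddball(n_trials, deviant_hz, p_deviant=0.15, seed=0):
    rng = random.Random(seed)
    return [deviant_hz if rng.random() < p_deviant else 1000
            for _ in range(n_trials)]

print(oddball(20, 1010))
```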
Epp, Bastian; Yasin, Ifat; Verhey, Jesko L
2013-12-01
The audibility of important sounds is often hampered by the presence of other masking sounds. The present study investigates whether a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level, but different signal levels, within the same set of listeners who participated in the psychoacoustical experiment. The data indicate differences in N1 and P2 between stimuli with and without interaural phase disparities. However, differences for stimuli with and without coherent masker modulation were found only for P2; i.e., only P2 is sensitive to the increase in audibility, irrespective of the cue that caused the masking release. The amplitude of P2 is consistent with the psychoacoustical finding of an addition of the masking releases when both cues are present. Even though it cannot be concluded where along the auditory pathway audibility is represented, the P2 component of auditory evoked potentials is a candidate for an objective measure of audibility in the human auditory system. Copyright © 2013 Elsevier B.V. All rights reserved.
Neural correlates of auditory recognition memory in the primate dorsal temporal pole
Ng, Chi-Wing; Plakke, Bethany
2013-01-01
Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324
Temporal lobe networks supporting the comprehension of spoken words.
Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius
2017-09-01
Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural-connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with a high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds, and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate executive functions of patients. The study included 40 patients who have had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. Across the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly more disadvantaged by the sounds from living beings, sounds from supermarket objects, and names of other products than by beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Evaluation of Hearing in Children with Autism by Using TEOAE and ABR
ERIC Educational Resources Information Center
Tas, Abdullah; Yagiz, Recep; Tas, Memduha; Esme, Meral; Uzun, Cem; Karasalihoglu, Ahmet Rifat
2007-01-01
Assessment of auditory abilities is important in the diagnosis and treatment of children with autism. The aim was to evaluate hearing objectively by using transient evoked otoacoustic emission (TEOAE) and auditory brainstem response (ABR). Tests were performed on 30 children with autism and 15 typically developing children, following otomicroscopy…
The Effects of Auditory Information on 4-Month-Old Infants' Perception of Trajectory Continuity
ERIC Educational Resources Information Center
Bremner, J. Gavin; Slater, Alan M.; Johnson, Scott P.; Mason, Uschi C.; Spring, Jo
2012-01-01
Young infants perceive an object's trajectory as continuous across occlusion provided the temporal or spatial gap in perception is small. In 3 experiments involving 72 participants the authors investigated the effects of different forms of auditory information on 4-month-olds' perception of trajectory continuity. Provision of dynamic auditory…
Working Memory for Patterned Sequences of Auditory Objects in a Songbird
ERIC Educational Resources Information Center
Comins, Jordan A.; Gentner, Timothy Q.
2010-01-01
The capacity to remember sequences is critical to many behaviors, such as navigation and communication. Adult humans readily recall the serial order of auditory items, and this ability is commonly understood to support, in part, the speech processing for language comprehension. Theories of short-term serial recall posit either use of absolute…
Mahr, Angela; Wentura, Dirk
2014-02-01
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
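The detection analysis in Experiment 3 rests on the signal detection index d'. A minimal sketch of the standard computation follows; the hit and false-alarm rates are invented for illustration and are not values from the study.

```python
# Standard signal-detection sensitivity index d' from hit and
# false-alarm rates; the rates below are illustrative only.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# A congruent prime that raises the hit rate at a fixed false-alarm
# rate increases d', the pattern the abstract reports.
print(d_prime(0.80, 0.20))  # ~1.68
print(d_prime(0.70, 0.20))  # ~1.37
```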
Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
2017-01-01
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated to the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS), we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing participants with bilateral subjective tinnitus and in controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise, and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal, and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
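Connectivity "measured between all channel pairs" is most simply realized as the pairwise correlation of the hemodynamic time courses; the abstract does not name the estimator, so the sketch below is an assumption, with random arrays standing in for HbO traces.

```python
# Sketch: resting-state functional connectivity as the Pearson
# correlation between all fNIRS channel pairs (assumed estimator;
# signals and sizes are placeholders, not study data).
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 600                 # e.g., 60 s at 10 Hz
hbo = rng.standard_normal((n_channels, n_samples))

connectivity = np.corrcoef(hbo)                # (8, 8) channel-pair matrix
# Comparing such matrices before vs. after the auditory challenge gives
# the kind of connectivity change the study reports.
print(connectivity.shape)
```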
Ma, Xiaoran; McPherson, Bradley; Ma, Lian
2016-03-01
Objective: Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods: Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results: Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion: A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an auditory diagnosis is made for this population.
Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.
Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E
2010-11-01
Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait, or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, a limit that has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was smaller in Experiment 2 than in Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph
2017-01-01
In a joint go/no-go Simon task, each of two participants is to respond to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task-setting in which non-social objects like a Japanese waving cat were present, but no real co-actor. They postulated that every sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient is so far not well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual joint go/no-go Simon task (Experiment 2). Results revealed that joint spatial compatibility effects only occurred in an auditory Simon task when the cat provided auditory cues while no joint spatial compatibility effects were found in a visual Simon task. This demonstrates that it is not the sufficiently salient object alone that leads to joint spatial compatibility effects but instead, a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.
Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J
2017-06-01
Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modalities to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false-alarm rates for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
Conceptual Coherence Affects Phonological Activation of Context Objects during Object Naming
ERIC Educational Resources Information Center
Oppermann, Frank; Jescheniak, Jorg D.; Schriefers, Herbert
2008-01-01
In 4 picture-word interference experiments, speakers named a target object that was presented with a context object. Using auditory distractors that were phonologically related or unrelated either to the target object or the context object, the authors assessed whether phonological processing was confined to the target object or not. Phonological…
Threlkeld, Steven W; McClure, Melissa M; Rosen, Glenn D; Fitch, R Holly
2006-09-13
Induction of a focal freeze lesion to the skullcap of a 1-day-old rat pup leads to the formation of microgyria similar to those identified postmortem in human dyslexics. Rats with microgyria exhibit rapid auditory processing deficits similar to those seen in language-impaired (LI) children and infants at risk for LI, and these effects are particularly marked in juvenile as compared to adult subjects. In the current study, a startle response paradigm was used to investigate gap detection in juvenile and adult rats that received bilateral freezing lesions or sham surgery on postnatal day (P) 1, 3, or 5. Microgyria were confirmed in P1 and P3 lesion rats, but not in the P5 lesion group. We found a significant reduction in brain weight and neocortical volume in P1 and P3 lesioned brains relative to shams. Juvenile (P27-39) behavioral data indicated significant rapid auditory processing deficits in all three lesion groups as compared to sham subjects, while adult (P60+) data revealed a persistent disparity only between P1-lesioned rats and shams. Combined results suggest that generalized pathology affecting neocortical development is responsible for the presence of rapid auditory processing deficits, rather than factors specific to the formation of microgyria per se. Finally, results show that the window for the induction of rapid auditory processing deficits through disruption of neurodevelopment appears to extend beyond the endpoint for cortical neuronal migration, although the persistent deficits exhibited by P1 lesion subjects suggest a secondary neurodevelopmental window at the time of cortical neuromigration representing a peak period of vulnerability.
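Gap detection in startle paradigms is commonly quantified as the fractional attenuation of the startle response when a silent gap precedes the pulse. The sketch below assumes that conventional index; the amplitudes are illustrative, not data from the study.

```python
# Conventional gap-detection index: how much a silent gap before the
# startle pulse attenuates the response (values are illustrative).
def gap_attenuation(amp_gap: float, amp_no_gap: float) -> float:
    """Fractional startle reduction on gap trials; larger = better detection."""
    return 1.0 - amp_gap / amp_no_gap

print(gap_attenuation(amp_gap=42.0, amp_no_gap=70.0))  # 0.4
```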
The effect of postsurgical pain on attentional processing in horses.
Dodds, Louise; Knight, Laura; Allen, Kate; Murrell, Joanna
2017-07-01
To investigate the effect of postsurgical pain on the performance of horses in a novel object and auditory startle task. Prospective clinical study. Twenty horses undergoing different types of surgery and 16 control horses that did not undergo surgery. The interaction of 36 horses with novel objects and the response to an auditory stimulus were measured at two time points: the day before surgery (T1) and the day after surgery (T2) for surgical horses (G1), and at a similar time interval for control horses (G2). Pain and sedation were measured using simple descriptive scales at the time the tests were carried out. Total time or score attributed to each of the behavioural categories was compared between groups (G1 and G2) for each test and between tests (T1 and T2) for each group. The median (range) time spent interacting with novel objects was reduced in G1 from 58 (6-367) seconds at T1 to 12 (0-495) seconds at T2 (p=0.0005). In G2 the change in interaction time between T1 and T2 was not statistically significant. Median (range) total auditory score was 7 (3-12) and 10 (1-12) in G1 and G2, respectively, at T1, decreasing to 6 (0-10) in G1 after surgery and 9.5 (1-12) in G2 (p=0.0003 and p=0.94, respectively). There was a difference in total auditory score between G1 and G2 at T2 (p=0.0169), with the score being lower in G1 than in G2. Postsurgical pain negatively impacts attention towards novel objects and causes decreased responsiveness to an auditory startle test. In horses, tasks demanding attention may be useful as a biomarker of pain. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. All rights reserved.
Development of Trivia Game for speech understanding in background noise.
Schwartz, Kathryn; Ringleb, Stacie I; Sandberg, Hilary; Raymer, Anastasia; Watson, Ginger S
2015-01-01
Listening in noise is an everyday activity and poses a challenge for many people. To improve the ability to understand speech in noise, a computerized auditory rehabilitation game was developed. In Trivia Game players are challenged to answer trivia questions spoken aloud. As players progress through the game, the level of background noise increases. A study using Trivia Game was conducted as a proof-of-concept investigation in healthy participants. College students with normal hearing were randomly assigned to a control (n = 13) or a treatment (n = 14) group. Treatment participants played Trivia Game 12 times over a 4-week period. All participants completed objective (auditory-only and audiovisual formats) and subjective listening in noise measures at baseline and 4 weeks later. There were no statistical differences between the groups at baseline. At post-test, the treatment group significantly improved their overall speech understanding in noise in the audiovisual condition and reported significant benefits in their functional listening abilities. Playing Trivia Game improved speech understanding in noise in healthy listeners. Significant findings for the audiovisual condition suggest that participants improved face-reading abilities. Trivia Game may be a platform for investigating changes in speech understanding in individuals with sensory, linguistic and cognitive impairments.
Auditory cortical processing in real-world listening: the auditory system going real.
Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin
2014-11-12
The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.
Artificial Induction of Sox21 Regulates Sensory Cell Formation in the Embryonic Chicken Inner Ear
Freeman, Stephen D.; Daudet, Nicolas
2012-01-01
During embryonic development, hair cells and support cells in the sensory epithelia of the inner ear derive from progenitors that express Sox2, a member of the SoxB1 family of transcription factors. Sox2 is essential for sensory specification, but high levels of Sox2 expression appear to inhibit hair cell differentiation, suggesting that factors regulating Sox2 activity could be critical for both processes. Antagonistic interactions between SoxB1 and SoxB2 factors are known to regulate cell differentiation in neural tissue, which led us to investigate the potential roles of the SoxB2 member Sox21 during chicken inner ear development. Sox21 is normally expressed by sensory progenitors within vestibular and auditory regions of the early embryonic chicken inner ear. At later stages, Sox21 is differentially expressed in the vestibular and auditory organs. Sox21 is restricted to the support cell layer of the auditory epithelium, while it is enriched in the hair cell layer of the vestibular organs. To test Sox21 function, we used two temporally distinct gain-of-function approaches. Sustained over-expression of Sox21 from early developmental stages prevented prosensory specification, and abolished the formation of both hair cells and support cells. However, later induction of Sox21 expression at the time of hair cell formation in organotypic cultures of vestibular epithelia inhibited endogenous Sox2 expression and Notch activity, and biased progenitor cells towards a hair cell fate. Interestingly, Sox21 did not promote hair cell differentiation in the immature auditory epithelium, which fits with the expression of endogenous Sox21 within mature support cells in this tissue. These results suggest that interactions among endogenous SoxB family transcription factors may regulate sensory cell formation in the inner ear, but in a context-dependent manner. PMID:23071561
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel method, delivered via tablet computer, for identifying and measuring poor auditory processing in school-aged children. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. Group 1 comprised 847 students aged 5 to 13 years, and Group 2 comprised 46 students aged 5 to 14 years. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to further investigate the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.
Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing
2016-01-01
Several studies have explored brain-computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Children's Auditory Working Memory Performance in Degraded Listening Conditions
ERIC Educational Resources Information Center
Osman, Homira; Sullivan, Jessica R.
2014-01-01
Purpose: The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It…
ERIC Educational Resources Information Center
Steinhaus, Kurt A.
A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…
ERIC Educational Resources Information Center
Takahashi, Hidetoshi; Nakahachi, Takayuki; Stickley, Andrew; Ishitobi, Makoto; Kamio, Yoko
2018-01-01
The objective of this study was to investigate relationships between caregiver-reported sensory processing abnormalities, and the physiological index of auditory over-responsiveness evaluated using acoustic startle response measures, in children with autism spectrum disorders and typical development. Mean acoustic startle response magnitudes in…
Effect of training and level of external auditory feedback on the singing voice: volume and quality
Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J.
2015-01-01
Background: Previous research suggests that classically trained professional singers rely not only on external auditory feedback but also on proprioceptive feedback associated with internal voice sensitivities. Objectives: The Lombard effect in singers and the relationship between sound pressure level (SPL) and external auditory feedback were evaluated for professional and non-professional singers. Additionally, the relationship between voice quality, evaluated in terms of the Singing Power Ratio (SPR), and external auditory feedback, level of accompaniment, voice register, and singer gender was analyzed. Methods: The subjects were 10 amateur or beginner singers and 10 classically trained professional or semi-professional singers (10 males and 10 females). Subjects sang an excerpt from The Star-Spangled Banner with three different levels of accompaniment (70, 80, and 90 dBA) and with three different levels of external auditory feedback. SPL and SPR were analyzed. Results: The Lombard effect was stronger for non-professional singers than professional singers. Higher levels of external auditory feedback were associated with a reduction in SPL. As predicted, the mean SPR was higher for professional than non-professional singers. Better voice quality was detected in the presence of higher levels of external auditory feedback. Conclusions: With an increase in training, the singer's reliance on external auditory feedback decreases. PMID:26186810
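The Singing Power Ratio is conventionally the level difference between the strongest spectral peak in the 2-4 kHz band and the strongest peak in the 0-2 kHz band; the abstract does not restate the formula, so the sketch below assumes that convention, using a synthetic two-component signal in place of a sung vowel.

```python
# Singing Power Ratio (SPR) under the conventional definition:
# strongest peak in 2-4 kHz relative to strongest peak in 0-2 kHz, in dB.
import numpy as np

def spr_db(signal: np.ndarray, fs: float) -> float:
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    low_peak = spectrum[freqs < 2000].max()
    high_peak = spectrum[(freqs >= 2000) & (freqs < 4000)].max()
    return 20.0 * np.log10(high_peak / low_peak)

fs = 16000.0
t = np.arange(int(fs)) / fs                     # 1 s synthetic "voice"
voice = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 2800 * t)
print(spr_db(voice, fs))                        # about -20 dB for this toy signal
```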
Happel, Max F. K.; Ohl, Frank W.
2017-01-01
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we investigated how information about frequency and sound level is integrated at the circuit level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site, inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
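Laminar CSD analysis is standardly computed as the negative second spatial derivative of the field potential across equally spaced contacts. The sketch below shows that standard estimator, not the authors' exact pipeline; the LFP array is a random placeholder.

```python
# Standard CSD estimate: negative second spatial derivative of the
# laminar LFP across equally spaced contacts (placeholder data).
import numpy as np

def csd(lfp: np.ndarray, spacing_mm: float = 0.1) -> np.ndarray:
    """lfp: (n_contacts, n_samples); CSD is defined for interior contacts."""
    return -(lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) / spacing_mm ** 2

lfp = np.random.default_rng(1).standard_normal((16, 1000))
print(csd(lfp).shape)   # (14, 1000): the two boundary contacts are lost
```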
Relationship between Auditory and Cognitive Abilities in Older Adults
Sheft, Stanley
2015-01-01
Objective: The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods: Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer's Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects, with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results: Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariate race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions: For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423
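The association was assessed in a regression model with demographic covariates. The sketch below illustrates the general form of such a model, ordinary least squares with an auditory predictor plus a covariate, using random placeholder data and only one of the four covariates named in the abstract; it is not the study's statistical code.

```python
# Sketch: regressing a global cognition score on an auditory measure
# plus a demographic covariate (placeholder data; one covariate shown).
import numpy as np

rng = np.random.default_rng(5)
n = 124                                     # cohort size from the abstract
spectral = rng.standard_normal(n)           # spectral-pattern discrimination
age = rng.normal(75.0, 7.0, n)
X = np.column_stack([np.ones(n), spectral, age])
cognition = 0.4 * spectral - 0.02 * age + rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
print(beta)     # the weight on `spectral` estimates its unique contribution
```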
Auditory salience using natural soundscapes.
Huang, Nicholas; Elhilali, Mounya
2017-03-01
Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
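The abstract's key assumption, cortical units encoding the difference between the observed signal and an internal estimate built from a dictionary of known sources, can be made concrete with a toy non-negative decomposition; the sketch below illustrates that idea only, not the published circuit model.

```python
# Toy version of the "error signal" idea: iteratively explain an observed
# spectrum from a dictionary of candidate sources; the residual is what
# the putative error units would encode. All values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
D = np.abs(rng.standard_normal((64, 10)))    # 64 channels x 10 known sources
true_coeffs = np.zeros(10)
true_coeffs[[2, 7]] = 1.0                    # two sources active in the scene
observed = D @ true_coeffs

coeffs = np.zeros(10)
step = 1.0 / np.linalg.norm(D, 2) ** 2       # safe gradient step size
for _ in range(500):
    error = observed - D @ coeffs            # residual ("error signal")
    coeffs = np.clip(coeffs + step * D.T @ error, 0.0, None)  # non-negative rates

print(np.round(coeffs, 2))                   # should peak at sources 2 and 7
```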
Faria, Rodolfo Souza; Gutierres, Luís Felipe Soares; Sobrinho, Fernando César Faria; Miranda, Iris do Vale; Reis, Júlia Dos; Dias, Elayne Vieira; Sartori, Cesar Renato; Moreira, Dalmo Antonio Ribeiro
2016-08-15
Exposure to negative environmental events triggers defensive behavior and leads to the formation of aversive associative memory. Cellular and molecular changes in the central nervous system underlie this memory formation, as well as the associated behavioral changes. In general, the memory process is established in distinct phases, such as acquisition, consolidation, evocation, persistence, and extinction of the acquired information. After exposure to a particular event, early changes in the involved neural circuits support memory consolidation, which corresponds to short-term memory. Re-exposure to previously memorized events evokes the original memory, a process that is considered essential for the reactivation and consequent persistence of memory, ensuring that long-term memory is established. Different environmental stimuli may modulate the memory formation process, as well as its distinct phases. Among the environmental stimuli capable of modulating memory formation is physical exercise, a potent modulator of neuronal activity. Many studies show that physical exercise modulates learning and memory processes, mainly in the consolidation phase of explicit memory. However, there are few reports in the literature regarding the role of physical exercise in implicit aversive associative memory, especially at the persistence phase. Thus, the present study aimed to investigate the relationship between swimming exercise and the consolidation and persistence of contextual and auditory-cued fear memory. Male Wistar rats were submitted to sessions of swimming exercise five times a week over six weeks. After that, the rats were submitted to classical aversive conditioning training with a paired tone/foot shock paradigm. Finally, rats were evaluated for consolidation and persistence of fear memory to both auditory and contextual cues. Our results demonstrate that classical aversive conditioning with tone/foot shock pairing induced consolidation as well as persistence of conditioned fear memory. In addition, rats submitted to swimming exercise over six weeks showed improved performance in the test of auditory-cued fear memory persistence, but not in the test of contextual fear memory persistence. Moreover, no significant effect of swimming exercise was observed on consolidation of either contextual or auditory fear memory. Our study, by revealing the effect of swimming exercise on different stages of implicit memory for tone/foot shock conditioning, contributes to and complements current knowledge about the environmental modulation of memory processes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Concept Formation Skills in Long-Term Cochlear Implant Users
Castellanos, Irina; Kronenberger, William G.; Beer, Jessica; Colson, Bethany G.; Henning, Shirley C.; Ditmars, Allison; Pisoni, David B.
2015-01-01
This study investigated if a period of auditory sensory deprivation followed by degraded auditory input and related language delays affects visual concept formation skills in long-term prelingually deaf cochlear implant (CI) users. We also examined if concept formation skills are mediated or moderated by other neurocognitive domains (i.e., language, working memory, and executive control). Relative to normally hearing (NH) peers, CI users displayed significantly poorer performance in several specific areas of concept formation, especially when multiple comparisons and relational concepts were components of the task. Differences in concept formation between CI users and NH peers were fully explained by differences in language and inhibition–concentration skills. Language skills were also found to be more strongly related to concept formation in CI users than in NH peers. The present findings suggest that complex relational concepts may be adversely affected by a period of early prelingual deafness followed by access to underspecified and degraded sound patterns and spoken language transmitted by a CI. Investigating a unique clinical population such as early-implanted prelingually deaf children with CIs can provide new insights into foundational brain–behavior relations and developmental processes. PMID:25583706
Translational control of auditory imprinting and structural plasticity by eIF2α.
Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L
2016-12-23
The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET), and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders.
Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti
2014-05-01
Two experiments comparing imaginative processing in different modalities and semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that the semantic processing was faster than the visual and the auditory imaginative processing, whereas no differentiation was possible between the semantic processing and the olfactory imaginative processing. Experiment 2 revealed that only the visual imaginative processing could be differentiated from the semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images strongly rely on semantic representations. By contrast, no differentiation is possible within the olfactory domain. Results are discussed in the frame of the imagery debate.
Leitmeyer, Katharina; Glutz, Andrea; Radojevic, Vesna; Setz, Cristian; Huerzeler, Nathan; Bumann, Helen; Bodmer, Daniel; Brand, Yves
2015-01-01
Rapamycin is an antifungal agent with immunosuppressive properties. Rapamycin inhibits the mammalian target of rapamycin (mTOR) by blocking the mTOR complex 1 (mTORC1). mTOR is an atypical serine/threonine protein kinase, which controls cell growth, cell proliferation, and cell metabolism. However, less is known about the mTOR pathway in the inner ear. First, we evaluated whether or not the two mTOR complexes (mTORC1 and mTORC2, respectively) are present in the mammalian cochlea. Next, tissue explants of 5-day-old rats were treated with increasing concentrations of rapamycin to explore the effects of rapamycin on auditory hair cells and spiral ganglion neurons. Auditory hair cell survival, spiral ganglion neuron number, length of neurites, and neuronal survival were analyzed in vitro. Our data indicate that both mTOR complexes are expressed in the mammalian cochlea. We observed that inhibition of mTOR by rapamycin results in dose-dependent damage to auditory hair cells. Moreover, spiral ganglion neurite number and length of neurites were significantly decreased at all concentrations used compared to control, in a dose-dependent manner. Our data indicate that mTOR may play a role in the survival of hair cells and modulates spiral ganglion neuronal outgrowth and neurite formation. PMID:25918725
Tomaszycki, Michelle L; Atchley, Derek
2017-10-01
Social relationships are complex, involving the production and comprehension of signals, individual recognition, and close coordination of behavior between two or more individuals. The nonapeptides oxytocin and vasopressin are widely believed to regulate social relationships. These findings come largely from prairie voles, in which nonapeptide receptors in olfactory neural circuits drive pair bonding. This research is assumed to apply to all species. Previous reviews have offered two competing hypotheses. The work of Sarah Newman has implicated a common neural network across species, the Social Behavior Network. In contrast, others have suggested that there are signal modality-specific networks that regulate social behavior. Our research focuses on evaluating these two competing hypotheses in the zebra finch, a species that relies heavily on vocal/auditory signals for communication, specifically the neural circuits underlying singing in males and song perception in females. We have demonstrated that the quality of vocal interactions is highly important for the formation of long-term monogamous bonds in zebra finches. Qualitative evidence at first suggests that nonapeptide receptor distributions are very different between monogamous rodents (olfactory species) and monogamous birds (vocal/auditory species). However, we have demonstrated that social bonding behaviors are not only correlated with activation of nonapeptide receptors in vocal and auditory circuits, but also involve regions of the common Social Behavior Network. Here, we show increased Vasopressin 1a receptor, but not oxytocin receptor, activation in two auditory regions following formation of a pair bond. To our knowledge, this is the first study to suggest a role of nonapeptides in the auditory circuit in pair bonding. Thus, we highlight converging mechanisms of social relationships and also point to the importance of studying multiple species to understand mechanisms of behavior. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Representing Knowledge: Assessment of Creativity in Humanities
ERIC Educational Resources Information Center
Zemits, Birut Irena
2017-01-01
Traditionally, assessment for university students in the humanities has been in an essay format, but this has changed extensively in the last decade. Assessments now may entail auditory and visual presentations, films, mind-maps, and other modes of communication. These formats are outside the established conventions of humanities and may be…
Sullivan, Jessica R; Osman, Homira; Schafer, Erin C
2015-06-01
The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and in noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to the demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.
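A -5 dB SNR condition means the speech level is fixed 5 dB below the babble. A minimal sketch of the usual mixing computation, with random arrays standing in for recorded speech and multitalker babble:

```python
# Mix speech and babble at a target SNR in dB (placeholder signals).
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # scale the noise so that 10*log10(P_speech / P_noise_scaled) == snr_db
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

rng = np.random.default_rng(3)
speech = rng.standard_normal(16000)      # stand-in for a sentence recording
babble = rng.standard_normal(16000)      # stand-in for multitalker babble
mixed = mix_at_snr(speech, babble, -5.0)
```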
Judging hardness of an object from the sounds of tapping created by a white cane.
Nunokawa, K; Seki, Y; Ino, S; Doi, K
2014-01-01
The white cane plays a vital role in supporting the independent mobility of the visually impaired. Allowing the recognition of target attributes through contact is an important function of a white cane. We have conducted research to obtain fundamental knowledge concerning the exploration methods used to perceive the hardness of an object through contact with a white cane. This research has allowed us to examine methods that enhance accuracy in the perception of objects, as well as the materials and structures of a white cane. Previous research suggests that it is necessary to consider the roles of both auditory and tactile information from the white cane in determining an object's hardness. This experimental study examined the ability of people to perceive the hardness of an object solely through the tapping sounds of a white cane (i.e., auditory information), using the method of magnitude estimation. Two types of sounds were used to estimate hardness: 1) the playback of recorded tapping sounds and 2) the sounds produced on-site by tapping. Three types of handgrips were used to create different sounds when tapping on an object with a cane. The participants of this experiment were five sighted university students wearing eye masks and two totally blind students who walk independently with a white cane. The results showed that both the sighted university students and the totally blind participants were able to accurately judge the hardness of an object solely by using auditory information from a white cane. For the blind participants, different handgrips significantly influenced the accuracy of their estimation of an object's hardness.
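Magnitude estimation data are usually pooled after rescaling each participant by their own geometric mean, since listeners adopt idiosyncratic number ranges. A minimal sketch of that common analysis step, with invented estimates rather than the study's data:

```python
# Geometric-mean normalization of magnitude estimates before pooling
# (a common analysis step; the numbers are invented for illustration).
import numpy as np

estimates = np.array([[20.0, 45.0, 90.0],    # listener 1, three materials
                      [10.0, 30.0, 55.0]])   # listener 2 uses smaller numbers
geo_mean = np.exp(np.log(estimates).mean(axis=1, keepdims=True))
normalized = estimates / geo_mean
print(normalized.mean(axis=0))               # pooled relative hardness scale
```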
Controlling the perceived distance of an auditory object by manipulation of loudspeaker directivity.
Laitinen, Mikko-Ville; Politis, Archontis; Huhtakallio, Ilkka; Pulkki, Ville
2015-06-01
This work presents a method to control the perceived distance of an auditory object by changing the directivity pattern of a loudspeaker and consequently the direct-to-reverberant ratio at the listening spot. Control of the directivity pattern is achieved by beamforming using a compact multi-driver loudspeaker unit. A small-sized cubic array consisting of six drivers is assembled, and per-driver beamforming filters are derived from directional measurements of the array. The proposed method is evaluated using formal listening tests. The results show that the perceived distance can be controlled effectively by directivity pattern modification.
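The manipulated quantity, the direct-to-reverberant ratio, is the energy of the direct sound relative to the reverberant tail of the room impulse response. A minimal sketch of that computation on a synthetic impulse response; the split time and all values are illustrative, not the paper's measurement setup.

```python
# Direct-to-reverberant ratio (DRR) from a room impulse response:
# energy before an early time split vs. energy after it (synthetic IR).
import numpy as np

def drr_db(ir: np.ndarray, fs: float, direct_ms: float = 2.5) -> float:
    split = int(fs * direct_ms / 1000.0)
    direct = np.sum(ir[:split] ** 2)
    reverberant = np.sum(ir[split:] ** 2)
    return 10.0 * np.log10(direct / reverberant)

fs = 48000.0
ir = np.zeros(int(fs * 0.5))
ir[0] = 1.0                                        # direct path
tail = 0.01 * np.random.default_rng(4).standard_normal(len(ir) - 2000)
ir[2000:] += tail                                  # diffuse reverberant tail
print(drr_db(ir, fs))   # beaming more energy at the listener raises the DRR
```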
ERIC Educational Resources Information Center
Behrmann, Polly; Millman, Joan
The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…
ERIC Educational Resources Information Center
Murray, Hugh
Proposed is a study to evaluate the auditory systems of learning disabled (LD) students with a new audiological, diagnostic, stimulus apparatus which is capable of objectively measuring the interaction of the binaural aspects of hearing. The author points out problems with LD definitions that exclude neurological disorders. The detection of…
Children Use Object-Level Category Knowledge to Detect Changes in Complex Auditory Scenes
ERIC Educational Resources Information Center
Vanden Bosch der Nederlanden, Christina M.; Snyder, Joel S.; Hannon, Erin E.
2016-01-01
Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a…
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction: The auditory system of HIV-positive children may have deficits at various levels, such as a high incidence of middle ear problems that can cause hearing loss. Objective: The objective of this study is to characterize the performance of children infected by the human immunodeficiency virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods: We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results: The children had abnormal auditory processing, as verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under conditions of background noise in both age groups, with most errors in the left ear in the group of 8-year-olds, and similar results for the group aged 9 years. Conclusion: The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Liu, Ke; Ji, Fei; Yang, Guan; Hou, Zhaohui; Sun, Jianhe; Wang, Xiaoyu; Guo, Weiwei; Sun, Wei; Yang, Weiyan; Yang, Xiao; Yang, Shiming
2016-10-01
More than 100 genes have been associated with deafness. However, SMAD4 is rarely considered a contributor to deafness in humans, beyond its well-defined role in cell differentiation and regeneration. Here, we report that a SMAD4 defect in mice can cause auditory neuropathy, a poorly understood hearing and speech perception disorder in humans whose genetic background remains unclear. Our study showed that the SMAD4 defect prevents formation of the cochlear ribbon synapse during the early stage of auditory development in mice. Further investigation found nearly normal morphology of outer hair cells (OHCs) and postsynaptic spiral ganglion neurons (SGNs) in SMAD4 conditional knockout (cKO) mice; moreover, preserved distortion product otoacoustic emissions (DPOAE) and cochlear microphonics (CM) could still be evoked in cKO mice. In addition, partial restoration of hearing, detected by the electrically evoked auditory brainstem response (eABR), was obtained in cKO mice by direct electrical stimulation of the auditory nerve. Additionally, the ribbon synapses in the retina are not affected by this SMAD4 defect. Thus, our findings suggest that the SMAD4 defect causes auditory neuropathy via selective disruption of cochlear ribbon synapses.
Auditory Scene Analysis: An Attention Perspective
2017-01-01
Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. The data show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady-state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady-state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady-state response-estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between estimated and behavioral thresholds, relative to the normal group, was greater in the mesial temporal sclerosis group than in the central auditory processing disorder group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR-estimated thresholds and actual behavioral thresholds, with ASSR-estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between ASSR-estimated and behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady-state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
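The sensitivity and specificity reported here are standard contingency-table quantities; the sketch below uses invented pass/fail labels purely to show the computation.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: 1 = lesion or dysfunction present, 0 = absent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical outcomes: the test flags a subject when the ASSR-estimated
# minus behavioral threshold difference exceeds some cut-off.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```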
The effect of written text on comprehension of spoken English as a foreign language.
Diao, Yali; Chandler, Paul; Sweller, John
2007-01-01
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
Spoken Word Recognition in Toddlers Who Use Cochlear Implants
Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.
2010-01-01
Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921
Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro
2015-01-01
ABSTRACT Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and had undergone an acoustically controlled auditory training program approximately one year earlier. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. The amplitude of all waves also increased; the increase was statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears at reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words test in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures one year after training. PMID:26676270
Using Neuroplasticity-Based Auditory Training to Improve Verbal Memory in Schizophrenia
Fisher, Melissa; Holland, Christine; Merzenich, Michael M.; Vinogradov, Sophia
2009-01-01
Objective Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Method Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Results Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Conclusions Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach. PMID:19448187
Connecting the ear to the brain: molecular mechanisms of auditory circuit assembly
Appler, Jessica M.; Goodrich, Lisa V.
2011-01-01
Our sense of hearing depends on precisely organized circuits that allow us to sense, perceive, and respond to complex sounds in our environment, from music and language to simple warning signals. Auditory processing begins in the cochlea of the inner ear, where sounds are detected by sensory hair cells and then transmitted to the central nervous system by spiral ganglion neurons, which faithfully preserve the frequency, intensity, and timing of each stimulus. During the assembly of auditory circuits, spiral ganglion neurons establish precise connections that link hair cells in the cochlea to target neurons in the auditory brainstem, develop specific firing properties, and elaborate unusual synapses both in the periphery and in the CNS. Understanding how spiral ganglion neurons acquire these unique properties is a key goal in auditory neuroscience, as these neurons represent the sole input of auditory information to the brain. In addition, the best currently available treatment for many forms of deafness is the cochlear implant, which compensates for lost hair cell function by directly stimulating the auditory nerve. Historically, studies of the auditory system have lagged behind other sensory systems due to the small size and inaccessibility of the inner ear. With the advent of new molecular genetic tools, this gap is narrowing. Here, we summarize recent insights into the cellular and molecular cues that guide the development of spiral ganglion neurons, from their origin in the proneurosensory domain of the otic vesicle to the formation of specialized synapses that ensure rapid and reliable transmission of sound information from the ear to the brain. PMID:21232575
The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.
Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A
2018-01-01
Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, where our participants improvised onomatopoeias from noisy moving objects in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. When the classifier was applied to a list of cross-linguistic onomatopoeias, simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
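As a rough illustration of the validation step, a cross-validated classifier of this general kind might look as follows; the feature matrix and labels are invented stand-ins, not the study's articulatory or acoustic measurements, so accuracy here hovers near chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Invented features: one row per onomatopoeia; columns might encode, e.g.,
# consonant manner-of-articulation and vowel formant summaries.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = rng.integers(0, 3, size=120)          # 0 = slide, 1 = hit, 2 = ring

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())   # held-out accuracy
```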
Auditory Speech Perception Development in Relation to Patient's Age with Cochlear Implant
Ciscare, Grace Kelly Seixas; Mantello, Erika Barioni; Fortunato-Queiroz, Carla Aparecida Urzedo; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa dos
2017-01-01
Introduction A cochlear implant in adolescent patients with pre-lingual deafness is still a debatable issue. Objective The objective of this study is to analyze and compare the development of auditory speech perception in children with pre-lingual auditory impairment submitted to cochlear implantation, in different age groups in the first year after implantation. Method This is a retrospective, documentary study, in which we analyzed 78 reports of children with severe bilateral sensorineural hearing loss, unilateral cochlear implant users of both sexes. They were divided into three groups: G1, 22 children aged less than 42 months; G2, 28 children aged between 43 and 83 months; and G3, 28 children older than 84 months. We collected medical record data to characterize the patients, auditory thresholds with cochlear implants, assessment of speech perception, and auditory skills. Results There was no statistical difference in the association of the results among groups G1, G2, and G3 with sex, caregiver education level, city of residence, and speech perception level. There was a moderate correlation between age and hearing aid use time, and between age and cochlear implant use time. There was a strong correlation between age and the age at which cochlear implantation was performed, and between hearing aid use time and the age at implantation. Conclusion There was no statistical difference in speech perception in relation to the patient's age at cochlear implantation. There were statistically significant differences in auditory deprivation time between G3 and G1 and between G2 and G1, and in hearing aid use time between G3 and G2 and between G3 and G1. PMID:28680487
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, these findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
Using an auditory sensory substitution device to augment vision: evidence from eye movements.
Wright, Thomas D; Margolis, Aaron; Ward, Jamie
2015-03-01
Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhance the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.
Depressive and Anxiety Symptoms in Older Adults With Auditory, Vision, and Dual Sensory Impairment.
Simning, Adam; Fox, Meghan L; Barnett, Steven L; Sorensen, Silvia; Conwell, Yeates
2018-06-01
The objective of the study is to examine the association of auditory, vision, and dual sensory impairment with late-life depressive and anxiety symptoms. Our study included 7,507 older adults from the National Health & Aging Trends Study, a nationally representative sample of U.S. Medicare beneficiaries. Auditory and vision impairment were determined by self-report, and depressive and anxiety symptoms were evaluated by the two-item Patient Health Questionnaire (PHQ-2) and two-item Generalized Anxiety Disorder Scale (GAD-2), respectively. Auditory, vision, and dual impairment were associated with an increased risk of depressive and anxiety symptoms in multivariable analyses accounting for sociodemographics, medical comorbidity, and functional impairment. Auditory, vision, and dual impairment were also associated with an increased risk for depressive and anxiety symptoms that persist or were of new onset after 1 year. Screening older adults with sensory impairments for depression and anxiety, and screening those with late-life depression and anxiety for sensory impairments, may identify treatment opportunities to optimize health and well-being.
Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M
2006-10-25
The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.
Meyerhoff, Hauke S; Huff, Markus
2016-04-01
Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
Barcroft, Joe; Sommers, Mitchell S; Tye-Murray, Nancy; Mauzé, Elizabeth; Schroy, Catherine; Spehar, Brent
2011-11-01
Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. The experiment implemented a 2 × 2 × 2 mixed design, with training condition as a between-participants variable and testing interval and test version as repeated-measures variables. Participants completed a computerized six-week auditory training program wherein they heard either the speech of a single talker or the speech of six talkers. Training gains were assessed with single-talker and multi-talker versions of the Four-choice discrimination test. Participants in both groups were tested on both versions. Sixty-nine adult hearing-aid users were randomly assigned to either single-talker or multi-talker auditory training. Both groups showed significant gains on both test versions. Participants who trained with multiple talkers showed greater improvement on the multi-talker version whereas participants who trained with a single talker showed greater improvement on the single-talker version. Transfer-appropriate gains occurred following auditory training, suggesting that auditory training can be designed to target specific patient needs.
Cardon, Garrett; Sharma, Anu
2013-01-01
Objective We examined cortical auditory development and behavioral outcomes in children with ANSD fitted with cochlear implants (CI). Design Cortical maturation, measured by P1 cortical auditory evoked potential (CAEP) latency, was regressed against scores on the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS). Implantation age was also considered in relation to CAEP findings. Study Sample Cross-sectional and longitudinal samples of 24 and 11 children, respectively, with ANSD fitted with CIs. Results P1 CAEP responses were present in all children after implantation, though previous findings suggest that only 50-75% of ANSD children with hearing aids show CAEP responses. P1 CAEP latency was significantly correlated with participants' IT-MAIS scores. Furthermore, more children implanted before age two years showed normal P1 latencies, while those implanted later mainly showed delayed latencies. Longitudinal analysis revealed that most children showed normal or improved cortical maturation after implantation. Conclusion Cochlear implantation resulted in measurable cortical auditory development for all children with ANSD. Children fitted with CIs under age two years were more likely to show age-appropriate CAEP responses within 6 months after implantation, suggesting a possible sensitive period for cortical auditory development in ANSD. That CAEP responses were correlated with behavioral outcome highlights their utility in clinical decision-making. PMID:23819618
Kostopoulos, Penelope; Petrides, Michael
2016-02-16
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
2012-01-01
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds. PMID:22582038
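The entropy-over-time measure can be approximated by a short-time spectral entropy; the sketch below uses assumed frame parameters and a toy signal, not the authors' exact SSV definition.

```python
import numpy as np

def spectral_entropy_track(x, frame_len=1024, hop=512):
    """Shannon entropy (bits) of the normalized power spectrum per frame."""
    entropies = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * np.hanning(frame_len)
        p = np.abs(np.fft.rfft(frame)) ** 2
        p /= p.sum() + 1e-12                  # normalize to a distribution
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(entropies)

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)   # toy sound
print(spectral_entropy_track(x).std())   # variability of entropy over time
```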
Structured Activities in Perceptual Training to Aid Retention of Visual and Auditory Images.
Graves, James W.; And Others
The experimental program in structured activities in perceptual training was said to have two main objectives: to train children in retention of visual and auditory images and to increase the children's motivation to learn. Eight boys and girls participated in the program for two hours daily for a 10-week period. The age range was 7.0 to 12.10…
Di Pinto, Marcos; Conklin, Heather M.; Li, Chenghong
Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.
Sousa, Ana Constantino; Didoné, Dayane Domeneghini; Sleifer, Pricila
2017-01-01
Introduction Preterm neonates are at risk of changes in auditory system development, which explains the need for auditory monitoring of this population. The Auditory Steady-State Response (ASSR) is an objective method of obtaining electrophysiological thresholds, with broad applicability in the neonatal and pediatric population. Objective The purpose of this study is to compare ASSR thresholds in preterm and term infants evaluated at two stages. Method The study included 63 normal-hearing neonates: 33 preterm and 30 term. They underwent ASSR assessment in both ears simultaneously through insert earphones at frequencies from 500 to 4000 Hz, with amplitude modulation rates from 77 to 103 Hz. Stimulus intensity was decreased progressively to detect the minimum response level. At 18 months, 26 of the 33 preterm infants returned for reassessment with ASSR and were compared with 30 full-term infants. Groups were compared according to gestational age. Results Electrophysiological thresholds were higher in preterm than in full-term neonates (p < 0.05) at the first testing. There were no significant differences between ears or by gender. At 18 months, there was no difference between groups (p > 0.05) on any of the variables described. Conclusion In the first evaluation, preterm infants had higher ASSR thresholds. There was no difference at 18 months of age, demonstrating the auditory maturation of preterm infants throughout their development. PMID:28680486
Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen
2014-10-01
Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.
Cooperative dynamics in auditory brain response
Kwapień, J.; Drożdż, S.; Liu, L. C.; Ioannides, A. A.
1998-11-01
Simultaneous estimates of activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulations were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analyzed using the concept of mutual information. The analysis constitutes an objective method to address the nature of interhemispheric correlations in response to auditory stimulations. The results provide clear evidence of the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the interhemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
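A plug-in (histogram) estimate of lagged mutual information between two response time series conveys the flavour of such an analysis; the bin count, lag range, and toy signals are assumptions, not the authors' settings.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information between two signals, in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def lagged_mi(left, right, max_lag):
    """Mutual information as a function of the shift applied to `left`."""
    n = len(left)
    return [(lag, mutual_information(left[max_lag + lag:n - max_lag + lag],
                                     right[max_lag:n - max_lag]))
            for lag in range(-max_lag, max_lag + 1)]

rng = np.random.default_rng(1)
drive = rng.normal(size=2000)
left = drive + 0.5 * rng.normal(size=2000)
right = np.roll(drive, 15) + 0.5 * rng.normal(size=2000)    # delayed copy
print(max(lagged_mi(left, right, 30), key=lambda p: p[1]))  # peak near the delay
```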
Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex
Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.
2009-01-01
‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492
Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J
2017-04-01
Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was related to a combination of working memory and peripheral auditory factors across all listeners in the study. The results of this study are consistent with previous findings that a subset of listeners with MTBI has objective auditory deficits. Speech-in-noise performance was related to a combination of auditory and nonauditory factors, confirming the important role of audiology in MTBI rehabilitation. Further research is needed to evaluate the prevalence and causal relationship of auditory deficits following MTBI. American Academy of Audiology
Liu, Yung-Ching; Jhuang, Jing-Wun
2012-07-01
A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. These displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid visual-auditory displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks involving driving, stimulus-response (S-R), divided attention, and stress rating. Results show that among single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance on the divided attention task and the accuracy of their S-R task decisions. Drivers' best performance was obtained with the hybrid display with spatial compatibility, which enabled the fastest responses and the best accuracy in both the S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Henshaw, Helen; Ferguson, Melanie A.
2013-01-01
Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431
Emerging technologies with potential for objectively evaluating speech recognition skills.
Rawool, Vishakha Waman
2016-01-01
Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.
Translational control of auditory imprinting and structural plasticity by eIF2α
Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L
2016-01-01
The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET) and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders. DOI: http://dx.doi.org/10.7554/eLife.17197.001 PMID:28009255
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
The impact of visual gaze direction on auditory object tracking.
Pomper, Ulrich; Chait, Maria
2017-07-05
Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
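Band-limited power of the kind reported here is conventionally obtained by integrating a Welch power spectral density over the band; a minimal sketch with assumed band edges and sampling rate (not taken from the study) follows.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Approximate power in the [lo, hi] Hz band from a Welch PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    band = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[band]) * (freqs[1] - freqs[0])

fs = 250                                  # assumed EEG sampling rate
eeg = np.random.randn(30 * fs)            # toy 30 s single-channel trace
print(band_power(eeg, fs, 4, 8),          # central theta (assumed 4-8 Hz)
      band_power(eeg, fs, 8, 13))         # occipital alpha (assumed 8-13 Hz)
```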
Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities
Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo
2015-01-01
Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness and the auditory middle latency response were tested with electrodes placed over the left and right hemispheres. The correlation between the measurements was assessed using the Spearman rank correlation coefficient. Results There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed low scores for their age group, although on the objective examination, prolonged Pa latency in the contralateral pathway was observed. Weak to moderate negative correlations were observed for Pa wave latency, as were weak positive correlations for Na-Pa amplitude. PMID:26491479
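The correlation step reduces to the Spearman rank coefficient, a single call in practice; the paired values below are invented for illustration only.

```python
from scipy.stats import spearmanr

# Hypothetical paired scores: Pa latency (ms) vs. syllabic-awareness score.
pa_latency = [28, 30, 31, 33, 35, 36, 38, 40]
syllabic_score = [9, 8, 8, 7, 6, 6, 5, 4]
rho, p = spearmanr(pa_latency, syllabic_score)
print(rho, p)   # a negative rho mirrors the direction reported above
```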
Lew, Henry L; Lee, Eun Ha; Miyoshi, Yasushi; Chang, Douglas G; Date, Elaine S; Jerger, James F
2004-03-01
Because of the violent nature of traumatic brain injury, traumatic brain injury patients are susceptible to various types of trauma involving the auditory system. We report a case of a 55-yr-old man who presented with communication problems after traumatic brain injury. Initial results from behavioral audiometry and Weber/Rinne tests were not reliable because of poor cooperation. He was transferred to our service for inpatient rehabilitation, where review of the initial head computed tomographic scan showed only left temporal bone fracture. Brainstem auditory-evoked potential was then performed to evaluate his hearing function. The results showed bilateral absence of auditory-evoked responses, which strongly suggested bilateral deafness. This finding led to a follow-up computed tomographic scan, with focus on bilateral temporal bones. A subtle transverse fracture of the right temporal bone was then detected, in addition to the left temporal bone fracture previously identified. Like children with hearing impairment, traumatic brain injury patients may not be able to verbalize their auditory deficits in a timely manner. If hearing loss is suspected in a patient who is unable to participate in traditional behavioral audiometric testing, brainstem auditory-evoked potential may be an option for evaluating hearing dysfunction.
Petrini, Karin; Remark, Alicia; Smith, Louise; Nardini, Marko
2014-05-01
When visual information is available, human adults, but not children, have been shown to reduce sensory uncertainty by taking a weighted average of sensory cues. In the absence of reliable visual information (e.g. extremely dark environment, visual disorders), the use of other information is vital. Here we ask how humans combine haptic and auditory information from childhood. In the first experiment, adults and children aged 5 to 11 years judged the relative sizes of two objects in auditory, haptic, and non-conflicting bimodal conditions. In the second experiment, different groups of adults and children were tested in non-conflicting and conflicting bimodal conditions. In the first experiment, adults reduced sensory uncertainty by integrating the cues optimally, while children did not. In the second experiment, adults and children used similar weighting strategies to solve audio-haptic conflict. These results suggest that, in the absence of visual information, optimal integration of cues for discrimination of object size develops late in childhood. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
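The "weighted average of sensory cues" referred to above is usually formalized as maximum-likelihood integration, in which each cue is weighted by its reliability (inverse variance). A minimal sketch of that standard rule follows; the function name and numbers are illustrative assumptions, not the study's analysis code or data.

```python
# Minimal sketch of maximum-likelihood (inverse-variance) cue combination;
# estimates and variances are illustrative, not the authors' data.

def combine_cues(est_haptic, var_haptic, est_auditory, var_auditory):
    """Optimal weighted average: weights are proportional to reliability (1/variance)."""
    w_h = (1 / var_haptic) / (1 / var_haptic + 1 / var_auditory)
    w_a = 1 - w_h
    combined_estimate = w_h * est_haptic + w_a * est_auditory
    # Predicted bimodal variance is lower than either unimodal variance alone.
    combined_variance = (var_haptic * var_auditory) / (var_haptic + var_auditory)
    return combined_estimate, combined_variance

size, var = combine_cues(est_haptic=5.2, var_haptic=0.4, est_auditory=5.8, var_auditory=0.9)
print(size, var)  # the estimate lies nearer the more reliable (here, haptic) cue
```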
An Objective Measurement of the Build-Up of Auditory Streaming and of Its Modulation by Attention
ERIC Educational Resources Information Center
Thompson, Sarah K.; Carlyon, Robert P.; Cusack, Rhodri
2011-01-01
Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better…
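As an illustration of the stimulus paradigm, the sketch below synthesizes one such alternating ABA sequence (50-ms tones, 75-ms gaps, B shifted Δf semitones above A). The sample rate and A-tone frequency are assumptions, not parameters reported in the truncated abstract.

```python
# Sketch of an alternating ABA streaming sequence: 50-ms tones, 75-ms gaps,
# with the B tone delta_f semitones above the A tone.
import numpy as np

fs = 44100          # sample rate (Hz), assumed
f_a = 500.0         # A-tone frequency (Hz), assumed
delta_f = 4         # A-B separation in semitones
f_b = f_a * 2 ** (delta_f / 12)  # equal-tempered semitone step

def tone(freq, dur=0.050):
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * freq * t)

gap = np.zeros(int(fs * 0.075))
triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap, tone(f_a), gap])
sequence = np.tile(triplet, 10)  # repeated ABA triplets promote stream segregation
```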
Beer, Jessica; Harris, Michael S.; Kronenberger, William G.; Holt, Rachael Frush; Pisoni, David B.
2012-01-01
Objective The objective of this study was to evaluate the development of functional auditory skills, language, and adaptive behavior in deaf children with cochlear implants (CI) who also have additional disabilities (AD). Design A two-group, pre-test versus post-test design was used. Study sample Comparisons were made between 23 children with CIs and ADs, and an age-matched comparison group of 23 children with CIs without ADs (No-AD). Assessments were obtained pre-CI and within 12 months post-CI. Results All but two deaf children with ADs improved in auditory skills using the IT-MAIS. Most deaf children in the AD group made progress in receptive but not expressive language using the Preschool Language Scale, but their language quotients were lower than those of the No-AD group. Five of eight children with ADs made progress in daily living skills and socialization skills; two made progress in motor skills. Children with ADs who did not make progress in language did show progress in adaptive behavior. Conclusions Children with deafness and ADs made progress in functional auditory skills, receptive language, and adaptive behavior. Expanded assessment that includes adaptive functioning and multi-center collaboration is recommended to best determine the benefits of implantation in areas of expected growth in this clinical population. PMID:22509948
Exploring the Relationship between Physiological Measures of Cochlear and Brainstem Function
Dhar, S.; Abel, R.; Hornickel, J.; Nicol, T.; Skoe, E.; Zhao, W.; Kraus, N.
2009-01-01
Objective Otoacoustic emissions and the speech-evoked auditory brainstem response are objective indices of peripheral auditory physiology and are used clinically for assessing hearing function. While each measure has been extensively explored, their interdependence and the relationships between them remain relatively unexplored. Methods Distortion product otoacoustic emissions (DPOAE) and speech-evoked auditory brainstem responses (sABR) were recorded from 28 normal-hearing adults. Through correlational analyses, DPOAE characteristics were compared to measures of sABR timing and frequency encoding. Data were organized into two DPOAE (Strength and Structure) and five brainstem (Onset, Spectrotemporal, Harmonics, Envelope Boundary, Pitch) composite measures. Results DPOAE Strength shows significant relationships with sABR Spectrotemporal and Harmonics measures. DPOAE Structure shows significant relationships with sABR Envelope Boundary. Neither DPOAE Strength nor Structure is related to sABR Pitch. Conclusions The results of the present study show that certain aspects of the speech-evoked auditory brainstem responses are related to, or covary with, cochlear function as measured by distortion product otoacoustic emissions. Significance These results form a foundation for future work in clinical populations. Analyzing cochlear and brainstem function in parallel in different clinical populations will provide a more sensitive clinical battery for identifying the locus of different disorders (e.g., language based learning impairments, hearing impairment). PMID:19346159
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as in two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and in a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Early multisensory interactions affect the competition among multiple visual objects.
Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan
2011-04-01
In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.
Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude
Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea
2013-01-01
Objective Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict aptitude in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444
Bottom-up influences of voice continuity in focusing selective auditory attention
Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara
2015-01-01
Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings. PMID:24633644
Liu, Juan; Ando, Hiroshi
2016-01-01
Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
Lacerda, Clara Fonseca; Silva, Luciana Oliveira e; de Tavares Canto, Roberto Sérgio; Cheik, Nadia Carla
2012-01-01
Summary Introduction: The aging process produces structural and functional modifications that compromise postural control and central processing. Studies have addressed the need to identify risk factors for auditory health and safety in elderly people affected by auditory deficits and balance alterations. Objective: To evaluate the effect of hearing aids on quality of life, balance, and fear of falling in elderly people with bilateral hearing loss. Method: A clinical and experimental study was carried out with 56 elderly individuals with sensorineural hearing loss who were fitted with individual sound amplification devices (hearing aids). Participants answered the Short Form Health Survey (SF-36) quality-of-life questionnaire and the Falls Efficacy Scale-International (FES-I) and performed the Berg Balance Scale (BBS) test. After 4 months, the participants who had adapted to hearing aid use were reevaluated. Results: 50% of the participants adapted to the hearing aids. Men had greater difficulty adapting to the device, whereas age, degree of loss, and the presence of tinnitus and vertigo did not interfere with adaptation. After fitting, quality of life improved in the General Health State and Functional Capacity domains, tinnitus improved, and self-confidence increased. Conclusion: The use of hearing aids improved quality-of-life domains, which was reflected in better self-confidence and, in the long run, in a reduced fear of falling in elderly people with sensorineural hearing loss. PMID:25991930
Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila
2017-01-01
Introduction Auditory hallucinations are defined as experiences of auditory perceptions in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia and often become chronic and refractory over the course of the disease. Transcranial direct current stimulation (tDCS) – a safe, portable, and inexpensive neuromodulation technique – has emerged as a promising treatment for the management of auditory hallucinations. Objective The aim of this study is to analyze the level of evidence in the literature available for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods A systematic review was performed, searching the main electronic databases, including the Cochrane Library and MEDLINE/PubMed. The searches combined Medical Subject Headings (MeSH) and Health Sciences Descriptors terms. The PRISMA protocol was used as a guide, and the terms used were the clinical outcomes (“Schizophrenia” OR “Auditory Hallucinations” OR “Auditory Verbal Hallucinations” OR “Psychosis”) searched together (“AND”) with the interventions (“transcranial Direct Current Stimulation” OR “tDCS” OR “Brain Polarization”). Results Six randomized controlled trials that evaluated the effects of tDCS on the severity of auditory hallucinations in schizophrenic patients were selected. Analysis of the clinical results of these studies revealed inconsistent findings regarding the use of tDCS to reduce the severity of auditory hallucinations in schizophrenia. Only three studies revealed a therapeutic benefit, manifested by reductions in the severity and frequency of auditory verbal hallucinations in schizophrenic patients. Conclusion Although tDCS has shown promising results in reducing the severity of auditory hallucinations in schizophrenic patients, this technique cannot yet be recommended as a therapeutic alternative, owing to the lack of large-sample studies confirming the positive effects that have been described. PMID:28203084
Separating pitch chroma and pitch height in the human brain
Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.
2003-01-01
Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719
Electrically-evoked frequency-following response (EFFR) in the auditory brainstem of guinea pigs.
He, Wenxin; Ding, Xiuyong; Zhang, Ruxiang; Chen, Jing; Zhang, Daoxing; Wu, Xihong
2014-01-01
It is still a difficult clinical issue to decide whether a patient is a suitable candidate for a cochlear implant and to plan postoperative rehabilitation, especially for some special cases, such as auditory neuropathy. A partial solution to these problems is to preoperatively evaluate the functional integrity of the auditory neural pathways. For evaluating the strength of phase-locking of auditory neurons, which is not reflected in previous methods using the electrically evoked auditory brainstem response (EABR), a new method for recording phase-locking-related auditory responses to electrical stimulation, called the electrically evoked frequency-following response (EFFR), was developed and evaluated using guinea pigs. The main objective was to assess the feasibility of the method by testing whether the recorded signals reflected auditory neural responses or artifacts. The results showed the following: 1) the recorded signals were evoked by neural responses rather than by artifacts; 2) responses evoked by periodic signals were significantly higher than those evoked by white noise; 3) the latency of the responses fell in the expected range; 4) the responses decreased significantly after death of the guinea pigs; and 5) the responses decreased significantly when the animal was replaced by an electrical resistor. All of these results suggest that the method is valid. Recordings obtained using complex tones with a missing fundamental component and pure tones at various frequencies were consistent with those obtained using acoustic stimulation in previous studies.
Baseline vestibular and auditory findings in a trial of post-concussive syndrome
Meehan, Anna; Searing, Elizabeth; Weaver, Lindell; Lewandowski, Andrew
2016-01-01
Previous studies have reported high rates of auditory and vestibular-balance deficits immediately following head injury. This study uses a comprehensive battery of assessments to characterize auditory and vestibular function in 71 U.S. military service members with chronic symptoms following mild traumatic brain injury that did not resolve with traditional interventions. The majority of the study population reported hearing loss (70%) and recent vestibular symptoms (83%). Central auditory deficits were most prevalent, with 58% of participants failing the SCAN3:A screening test and 45% showing abnormal responses on auditory steady-state response testing presented at a suprathreshold intensity. Only 17% of the participants had abnormal hearing (>25 dB hearing loss) based on the pure-tone average. Objective vestibular testing supported significant deficits in this population, regardless of whether the participant self-reported active symptoms. Composite score on the Sensory Organization Test was lower than expected from normative data (mean 69.6 ± 15.6). High abnormality rates were found in funduscopy torsion (58%), oculomotor assessments (49%), ocular and cervical vestibular evoked myogenic potentials (46% and 33%, respectively), and monothermal calorics (40%). It is recommended that a full peripheral and central auditory, oculomotor, and vestibular-balance evaluation be completed on military service members who have sustained head trauma.
Visual influences on auditory spatial learning
King, Andrew J.
2008-01-01
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967
The Role of the Auditory Brainstem in Processing Musically Relevant Pitch
Bidelman, Gavin M.
2013-01-01
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294
Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets
Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.
2017-01-01
The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489
The influence of cochlear implants on behaviour problems in deaf children.
Jiménez-Romero, Ma Salud
2015-01-01
This study seeks to analyse the relationship between behaviour problems in deaf children and their auditory and communication development subsequent to cochlear implantation and to examine the incidence of these problems in comparison to their hearing peers. This study uses an ex post facto prospective design with a sample of 208 Spanish children, of whom 104 were deaf subjects with cochlear implants. The first objective assesses the relationships between behaviour problems, auditory integration, and social and communication skills in the group of deaf children. The second compares the frequency and intensity of behaviour problems of the group of deaf children with their hearing peers. The correlation analysis showed a significant association between the internal index of behaviour problems and auditory integration and communication skills, such that deaf children with greater auditory and communication development had no behaviour problems. When comparing behaviour problems in deaf children versus their hearing peers, behavioural disturbances are significantly more frequent in the former. According to these findings, cochlear implants may not guarantee adequate auditory and communicative development that would normalise the behaviour of deaf children.
Auditory perception and the control of spatially coordinated action of deaf and hearing children.
Savelsbergh, G J; Netelenbos, J B; Whiting, H T
1991-03-01
From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.
Significance of auditory and kinesthetic feedback to singers' pitch control.
Mürbe, Dirk; Pabst, Friedemann; Hofmann, Gert; Sundberg, Johan
2002-03-01
An accurate control of fundamental frequency (F0) is required from singers. This control relies on auditory and kinesthetic feedback. However, a loud accompaniment may mask the auditory feedback, leaving the singers to rely on kinesthetic feedback. The object of the present study was to estimate the significance of auditory and kinesthetic feedback to pitch control in 28 students beginning a professional solo singing education. The singers sang an ascending and descending triad pattern covering their entire pitch range, with and without masking noise, in legato and staccato, and in a slow and a fast tempo. F0 was measured by means of a computer program. The interval sizes between adjacent tones were determined and their departures from equally tempered tuning were calculated. The deviations from this tuning were used as a measure of the accuracy of intonation. Statistical analysis showed a significant effect of masking that amounted to a mean impairment of pitch accuracy of 14 cents across all subjects. Furthermore, significant effects were found of tempo as well as of the staccato/legato conditions. The results indicate that auditory feedback contributes significantly to singers' control of pitch.
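The cents measure used here to quantify intonation is defined as 1200·log2(f2/f1), and the deviation from equal temperament is the difference from the nearest 100-cent semitone step. A short worked example follows; the frequencies are invented for illustration.

```python
# Worked example of the cents measure of intonation accuracy; the frequencies
# below are illustrative, not the authors' data.
import math

def interval_cents(f1, f2):
    """Size of the interval from f1 to f2 in cents (100 cents = 1 equal-tempered semitone)."""
    return 1200 * math.log2(f2 / f1)

sung = interval_cents(440.0, 560.0)   # a sung major third, slightly sharp
target = 400                          # equal-tempered major third (4 semitones)
deviation = sung - target
print(f"sung interval = {sung:.1f} cents, deviation = {deviation:+.1f} cents")
```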
Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”
Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.
2010-01-01
Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526
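A common way to quantify the hemispheric alpha asymmetry described above is a normalized lateralization index over parietal band power. The specific (right - left) / (right + left) form below is a conventional choice assumed for illustration, not necessarily the paper's exact computation.

```python
# Hedged sketch of a hemispheric alpha lateralization index; the normalized
# difference form is a common convention and an assumption here, not
# necessarily the computation used in the paper.
import numpy as np

def alpha_lateralization(power_right_parietal, power_left_parietal):
    """Positive values indicate greater right-hemisphere alpha (8-12 Hz) power."""
    r = np.asarray(power_right_parietal, dtype=float)
    l = np.asarray(power_left_parietal, dtype=float)
    return (r - l) / (r + l)

# Illustrative per-trial alpha band power at right vs. left parietal sites
ali = alpha_lateralization([1.8, 2.1, 1.6], [1.2, 1.0, 1.4])
print(ali.mean())
```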
Martins, Kelly Vasconcelos Chaves; Gil, Daniela
2017-01-01
Introduction The recording of the P1 component of the cortical auditory evoked potential has been widely used to analyze the behavior of auditory pathways in response to cochlear implant stimulation. Objective To determine the influence of aural rehabilitation on the latency and amplitude of the P1 cortical auditory evoked potential component elicited by simple auditory stimuli (tone burst) and complex stimuli (speech) in children with cochlear implants. Method The study included six individuals of both genders, aged 5 to 10 years, who had been cochlear implant users for at least 12 months and attended aural rehabilitation therapy. Participants underwent cortical auditory evoked potential testing at the beginning of the study and after 3 months of aural rehabilitation. To elicit the responses, simple stimuli (tone burst) and complex stimuli (speech) were presented in free field at 70 dB HL. The results were statistically analyzed, and the two evaluations were compared. Results There was no significant difference in P1 latency or amplitude between the two types of eliciting stimulus. There was a statistically significant reduction in P1 latency between the evaluations for both stimuli after 3 months of aural rehabilitation. There was no statistically significant difference in P1 amplitude between the two types of stimuli or between the two evaluations. Conclusion A decrease in latency of the P1 component elicited by both simple and complex stimuli was observed within a three-month interval in children with cochlear implants undergoing aural rehabilitation. PMID:29018498
Stochastic correlative firing for figure-ground segregation.
Chen, Zhe
2005-03-01
Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.
ERIC Educational Resources Information Center
Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.
2008-01-01
The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components in response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or its harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency-tagging with an oddball design provides a valuable complement to the classic, transient, evoked-potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture neural markers of contrasts in fast continuous tone sequences in humans. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
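The logic of the frequency-tagging analysis can be sketched numerically: with sounds at 8 Hz and a contrast on every fifth sound, any contrast-specific response should surface at 8/5 = 1.6 Hz and its harmonics in the EEG amplitude spectrum. The simulation below is illustrative only; the sampling rate and signal amplitudes are assumptions.

```python
# Sketch of the frequency-tagging logic: a 40-s sequence at 8 Hz with a
# contrast every fifth sound tags contrast processing at 1.6 Hz and harmonics.
import numpy as np

fs = 512                              # EEG sampling rate (Hz), assumed
t = np.arange(0, 40, 1 / fs)          # one 40-s sequence, as in the study
eeg = (np.sin(2 * np.pi * 8.0 * t)             # response to every sound
       + 0.3 * np.sin(2 * np.pi * 1.6 * t)     # simulated contrast-specific response
       + np.random.randn(t.size))              # background noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size   # amplitude spectrum (0.025-Hz bins)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (1.6, 3.2, 8.0):             # tag frequency, its harmonic, and the base rate
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz amplitude: {spectrum[idx]:.3f}")
```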
Using Embryology Screencasts: A Useful Addition to the Student Learning Experience?
ERIC Educational Resources Information Center
Evans, Darrell J. R.
2011-01-01
Although podcasting has been a well used resource format in the last few years as a way of improving the student learning experience, the inclusion of enhanced audiovisual formats such as screencasts has been less used, despite the advantage that they work well for both visual and auditory learners. This study examines the use of and student…
Jin, Yecheng; Ren, Naixia; Li, Shiwei; Fu, Xiaolong; Sun, Xiaoyang; Men, Yuqin; Xu, Zhigang; Zhang, Jian; Xie, Yue; Xia, Ming; Gao, Jiangang
2016-06-03
Hair cells (HCs) are mechanosensors that play crucial roles in perceiving sound, acceleration, and fluid motion. The precise architecture of the auditory epithelium and its repair after HC loss is indispensable to the function of the organ of Corti (OC). In this study, we showed that Brg1 was highly expressed in auditory HCs. Specific deletion of Brg1 in postnatal HCs resulted in rapid HC degeneration and profound deafness in mice. Further experiments showed that the cell-intrinsic polarity of HCs was abolished, docking of outer hair cells (OHCs) by Deiters' cells (DCs) failed, and scar formation in the reticular lamina was deficient. We demonstrated that Brg1 ablation disrupted the Gαi/Insc/LGN and aPKC asymmetric distributions, without overt effects on the core planar cell polarity (PCP) pathway. We also demonstrated that Brg1-deficient HCs underwent apoptosis, and that leakage in the reticular lamina caused by deficient scar formation shifted the mode of OHC death from apoptosis to necrosis. Together, these data demonstrated a requirement for Brg1 activity in HC development and suggested a role for Brg1 in the proper formation of the cellular structure of HCs.
Lane, S D; Clow, J K; Innis, A; Critchfield, T S
1998-01-01
This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680
Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile
2018-02-19
The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users.
[Auditory rehabilitation programmes for adults: what do we know about their effectiveness?].
Cardemil, Felipe; Aguayo, Lorena; Fuente, Adrian
2014-01-01
Hearing loss ranks third among the health conditions that involve disability-adjusted life years. Hearing aids are the most commonly used treatment option in people with hearing loss. However, a number of auditory rehabilitation programmes have been developed with the aim of improving communicative abilities in people with hearing loss. The objective of this review was to determine the effectiveness of auditory rehabilitation programmes focused on communication strategies. This was a narrative review. A literature search using PubMed was carried out. This search included systematic reviews investigating the effectiveness of auditory training and of individual and group auditory rehabilitation programmes with the main focus on counselling and communicative strategies for adults with hearing loss. Each study was analysed in terms of the type of intervention used and the results obtained. Three articles were identified: one article about the effectiveness of auditory training programmes and 2 systematic reviews that investigated the effectiveness of communicative programmes in adults with hearing loss. The "Active Communication Education" programme appears to be an effective group programme of auditory rehabilitation that may be used with older Spanish-speaking adults. Hearing aid fitting and communicative programmes as rehabilitation options are associated with improvements in social participation and quality of life in patients with hearing loss; group auditory rehabilitation programmes in particular seem to have good potential for reducing activity limitations and social participation restrictions, and thus for improving patient quality of life. Copyright © 2013 Elsevier España, S.L. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Behroozmand, Roozbeh; Karvelis, Laura; Liu, Hanjun; Larson, Charles R.
2009-01-01
Objective The present study investigated whether self-vocalization enhances auditory neural responsiveness to voice pitch feedback perturbation and how this vocalization-induced neural modulation is affected by the extent of the feedback deviation. Method Event-related potentials (ERPs) were recorded in 15 subjects in response to +100, +200 and +500 cents pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the self-produced vocalizations. Results The amplitudes of the evoked P1 (latency: 73.51 ms) and P2 (latency: 199.55 ms) ERP components in response to feedback perturbation were significantly larger during vocalization than listening. The difference between P2 peak amplitudes during vocalization vs. listening was significantly larger for the +100 than the +500 cents stimulus. Conclusion Results indicate that the human auditory cortex is more responsive to voice F0 feedback perturbations during vocalization than during passive listening. Greater vocalization-induced enhancement of the auditory responsiveness to smaller feedback perturbations may imply that the audio-vocal system detects and corrects for errors in vocal production that closely match the expected vocal output. Significance Findings of this study support previous suggestions regarding enhanced auditory sensitivity to feedback alterations during self-vocalization, which may serve the purpose of feedback-based monitoring of one’s voice. PMID:19520602
NASA Astrophysics Data System (ADS)
Markovitz, Craig D.; Hogan, Patrick S.; Wesen, Kyle A.; Lim, Hubert H.
2015-04-01
Objective. The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. Approach. We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. Main results. All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. Significance. We propose that the corticofugal system is designed to decrease the brain’s input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.
Gardner-Berry, Kirsty; Chang, Hsiuwen; Ching, Teresa Y. C.; Hou, Sanna
2016-01-01
With the introduction of newborn hearing screening, infants are being diagnosed with hearing loss during the first few months of life. For infants with a sensory/neural hearing loss (SNHL), the audiogram can be estimated objectively using auditory brainstem response (ABR) testing and hearing aids prescribed accordingly. However, for infants with auditory neuropathy spectrum disorder (ANSD), whose ABR waveforms are abnormal or absent, alternative measures of auditory function are needed to assess the need for amplification and evaluate whether aided benefit has been achieved. Cortical auditory evoked potentials (CAEPs) are used to assess aided benefit in infants with hearing loss; however, there is insufficient information regarding the relationship between stimulus audibility and CAEP detection rates. It is also not clear whether CAEP detection rates differ between infants with SNHL and infants with ANSD. This study involved retrospective collection of CAEP, hearing threshold, and hearing aid gain data to investigate the relationship between stimulus audibility and CAEP detection rates. The results demonstrate that increases in stimulus audibility result in an increase in detection rate. For the same range of sensation levels, there was no difference in the detection rates between infants with SNHL and ANSD. PMID:27587922
Azar, Ali; Piccinelli, Chiara; Brown, Helen; Headon, Denis; Cheeseman, Michael
2016-01-01
Hypohidrotic ectodermal dysplasia (HED) results from mutation of the EDA, EDAR or EDARADD genes and is characterized by reduced or absent eccrine sweat glands, hair follicles and teeth, and defective formation of salivary, mammary and craniofacial glands. Mouse models of HED also carry Eda, Edar or Edaradd mutations and have defects that map to the same structures. Patients with HED have ear, nose and throat disease, but this has not been investigated in mice bearing comparable genetic mutations. We report that otitis media, rhinitis and nasopharyngitis occur at high frequency in Eda and Edar mutant mice, and we explore the pathogenic mechanisms related to glandular function and microbial and immune parameters in these lines. Nasopharyngeal auditory tube glands fail to develop in HED mutant mice, and the functional implications include loss of lysozyme secretion, reduced mucociliary clearance and overgrowth of nasal commensal bacteria accompanied by neutrophil exudation. A heavy nasopharyngeal foreign body load and loss of gland protection alter the auditory tube gating function, and the auditory tubes can become pathologically dilated. Accumulation of large foreign body particles in the bulla stimulates granuloma formation. Analysis of immune cell populations and myeloid cell function shows no evidence of overt immune deficiency in HED mutant mice. Our findings using HED mutant mice as a model for the human condition support the idea that ear and nose pathology in HED patients arises as a result of nasal and nasopharyngeal gland deficits, and that reduced mucociliary clearance and impaired auditory tube gating function underlie the pathological sequelae in the bulla. PMID:27378689
Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M
2014-02-01
Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human ERP and psychophysical music listening studies. Copyright © 2013 Elsevier B.V. All rights reserved.
Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation
Butler, Blake E.; Trainor, Laurel J.
2012-01-01
Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836
Plyler, Erin; Harkrider, Ashley W
2013-01-01
A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr, his vocabulary had diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists, who often are on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists often are among the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance for early and accurate referral, diagnosis, and monitoring of a child with LKS, which is imperative for a positive prognosis. American Academy of Audiology.
Bensoussan, Sandy; Cornil, Maude; Meunier-Salaün, Marie-Christine; Tallet, Céline
2016-01-01
Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals–individually or in combination with other signals—to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets’ ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals. PMID:27792731
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Cross-modal versus within-modal recall: differences in behavioral and brain responses.
Butler, Andrew J; James, Karin H
2011-10-31
Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations composed of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.
What's what in auditory cortices?
Retsa, Chrysa; Matusz, Pawel J; Schnupp, Jan W H; Murray, Micah M
2018-08-01
Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant; the only manipulation was the sound dimension that participants had to attend to, which varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography; the latter forcibly follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the 4 feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them. Copyright © 2018 Elsevier Inc. All rights reserved.
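The strength-versus-topography logic of electrical neuroimaging can be illustrated with two standard quantities: global field power (GFP) for response strength and global dissimilarity (DISS) for topography. Below is a minimal Python sketch assuming average-referenced multichannel data; the arrays are random stand-ins, not data from the study.

```python
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation across electrodes
    of an average-referenced map at each time point."""
    v = v - v.mean(axis=-1, keepdims=True)  # re-reference to the average
    return v.std(axis=-1)

def diss(u, v):
    """Global dissimilarity between two average-referenced maps, each
    normalized by its GFP; 0 = identical topography, 2 = inverted."""
    u = u - u.mean(); v = v - v.mean()
    u = u / u.std(); v = v / v.std()
    return np.sqrt(np.mean((u - v) ** 2))

# Hypothetical AEPs: conditions x time points x electrodes
aep = np.random.randn(4, 300, 128)           # stand-in for real data
strength = gfp(aep)                           # (4, 300) strength time courses
topo_change = diss(aep[0, 90], aep[1, 90])    # topography difference at one latency
```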
A Jurassic gliding euharamiyidan mammal with an ear of five auditory bones
NASA Astrophysics Data System (ADS)
Han, Gang; Mao, Fangyuan; Bi, Shundong; Wang, Yuanqing; Meng, Jin
2017-11-01
Gliding is a distinctive locomotion type that has been identified in only three mammal species from the Mesozoic era. Here we describe another Jurassic glider that belongs to the euharamiyidan mammals and shows hair details on its gliding membrane that are highly similar to those of extant gliding mammals. This species possesses a five-boned auditory apparatus consisting of the stapes, incus, malleus, ectotympanic and surangular, representing, to our knowledge, the earliest known definitive mammalian middle ear. The surangular has not been previously identified in any mammalian middle ear, and the morphology of each auditory bone differs from those of known mammals and their kin. We conclude that gliding locomotion was probably common in euharamiyidans, which lends support to the idea that there was a major adaptive radiation of mammals in the mid-Jurassic period. The acquisition of the auditory bones in euharamiyidans was related to the formation of the dentary-squamosal jaw joint, which allows a posterior chewing movement, and must have evolved independently from the middle ear structures of monotremes and therian mammals.
Predictive cues for auditory stream formation in humans and monkeys.
Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael
2017-12-18
Auditory perception is improved when stimuli are predictable, and, as shown previously, this effect is evident in a modulation of the activity of neurons in the auditory cortex. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently, measured the ability of human subjects to detect deviants embedded in these sequences, and recorded the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
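As a sketch of the factorial manipulation described above, the following Python snippet generates tone onset and duration lists in which isochrony (fixed versus jittered inter-onset intervals) and regularity (repeated versus shuffled duration pattern) vary independently; all parameter values are illustrative, not taken from the study.

```python
import numpy as np
rng = np.random.default_rng(0)

def make_sequence(n_tones=60, isochronous=True, regular=True,
                  ioi=0.3, pattern=(0.05, 0.1, 0.15)):
    """Return (onsets, durations) in seconds for one stimulus sequence.
    isochronous: fixed vs jittered inter-onset intervals.
    regular: cyclically repeated vs shuffled duration pattern."""
    if isochronous:
        iois = np.full(n_tones, ioi)
    else:
        iois = rng.uniform(0.7 * ioi, 1.3 * ioi, n_tones)  # jittered timing
    onsets = np.concatenate(([0.0], np.cumsum(iois[:-1])))
    durs = np.resize(pattern, n_tones).astype(float)       # repeating pattern
    if not regular:
        rng.shuffle(durs)  # same durations, unpredictable order
    return onsets, durs

# One cell of the 2x2 design: isochronous timing, irregular durations
onsets, durs = make_sequence(isochronous=True, regular=False)
```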
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models through the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
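One way such a three-component interface might look, sketched in Python for concreteness (all class and function names here are illustrative assumptions; the framework itself is described as language-independent):

```python
from abc import ABC, abstractmethod
import numpy as np

class PathwayModel(ABC):
    """Auditory pathway model: transforms binaural waveforms into an
    internal representation (format left to the concrete model)."""
    @abstractmethod
    def process(self, left: np.ndarray, right: np.ndarray, fs: int): ...

class ArtificialObserver(ABC):
    """Task-dependent decision stage that answers in the same format
    as a test subject (e.g. the index of the chosen interval)."""
    @abstractmethod
    def respond(self, representations: list) -> int: ...

def run_trial(stimuli, model: PathwayModel, observer: ArtificialObserver,
              fs: int = 44100) -> int:
    """Experiment-software side: present the intervals of one trial to the
    model and collect a single subject-like response from the observer."""
    representations = [model.process(l, r, fs) for (l, r) in stimuli]
    return observer.respond(representations)
# Concrete subclasses of PathwayModel and ArtificialObserver plug into
# run_trial exactly where a human listener would sit in the experiment.
```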
Object representation in the human auditory system
Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto
2010-01-01
One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch ’border’ between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W
2013-11-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
Butler, Christopher W.; Wilson, Yvette M.; Gunnersen, Jenny M.; Murphy, Mark
2015-01-01
Memory formation is thought to occur via enhanced synaptic connectivity between populations of neurons in the brain. However, it has been difficult to localize and identify the neurons that are directly involved in the formation of any specific memory. We have previously used fos-tau-lacZ (FTL) transgenic mice to identify discrete populations of neurons in amygdala and hypothalamus, which were specifically activated by fear conditioning to a context. Here we have examined neuronal activation due to fear conditioning to a more specific auditory cue. Discrete populations of learning-specific neurons were identified in only a small number of locations in the brain, including those previously found to be activated in amygdala and hypothalamus by context fear conditioning. These populations, each containing only a relatively small number of neurons, may be directly involved in fear learning and memory. PMID:26179231
Gilbert, Marcoita T; Soderstrom, Ken
2013-01-01
Cannabinoids are well-established to alter processes of sensory perception; however neurophysiological mechanisms responsible remain unclear. Arc, an immediate-early gene (IEG) product involved in dendritic spine dynamics and necessary for plasticity changes such as long-term potentiation, is rapidly induced within zebra finch caudal medial nidopallium (NCM) following novel song exposure, a response that habituates after repeated stimuli. Arc appears unique in its rapid postsynaptic dendritic expression following excitatory input. Previously, we found that vocal development-altering cannabinoid treatments are associated with elevated dendritic spine densities in motor- (HVC) and learning-related (Area X) song regions of zebra finch telencephalon. Given Arc’s dendritic morphological role, we hypothesized that cannabinoid-altered spine densities may involve Arc-related signaling. To test this, we examined the ability of the cannabinoid agonist WIN55212-2 (WIN) to: (1) acutely disrupt song-induced Arc expression; (2) interfere with habituation to auditory stimuli and; (3) alter dendritic spine densities in auditory regions. We found that WIN (3 mg/kg) acutely reduced Arc expression within both NCM and Field L2 in an antagonist-reversible manner. WIN did not alter Arc expression in thalamic auditory relay Nucleus Ovoidalis (Ov), suggesting cannabinoid signaling selectively alters responses to auditory stimulation. Novel song stimulation rapidly increased dendritic spine densities within auditory telencephalon, an effect blocked by WIN pretreatments. Taken together, cannabinoid inhibition of both Arc induction and its habituation to repeated stimuli, combined with prevention of rapid increases in dendritic spine densities, implicates cannabinoid signaling in modulation of physiological processes important to auditory responsiveness and memory. PMID:24134952
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
2002-12-01
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes in response to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are thus fully governed by the context within which the sounds occur: perceiving the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of the same deviants as one event (indexed by the MMN) in the absence of a corresponding change in the stimulus-driven sound organization.
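For readers unfamiliar with how MMN is quantified, the sketch below computes a deviant-minus-standard difference wave and picks its most negative deflection in a typical latency window; the epochs and window values are hypothetical placeholders, not this study's parameters.

```python
import numpy as np

def mmn_peak(standard, deviant, times, window=(0.10, 0.25)):
    """Deviant-minus-standard difference wave; return the most negative
    deflection (amplitude, latency) within the given latency window."""
    diff = deviant.mean(axis=0) - standard.mean(axis=0)  # trial averages
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(diff[mask])
    return diff[mask][idx], times[mask][idx]

# Hypothetical single-electrode epochs (trials x samples) at 500 Hz
fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
standard_epochs = np.random.randn(200, times.size)  # stand-in data
deviant_epochs = np.random.randn(40, times.size)
amplitude, latency = mmn_peak(standard_epochs, deviant_epochs, times)
```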
Classification of underwater target echoes based on auditory perception characteristics
NASA Astrophysics Data System (ADS)
Li, Xiukun; Meng, Xiangxia; Liu, Hang; Liu, Mingye
2014-06-01
In underwater target detection, bottom reverberation shares some properties with the target echo, which greatly degrades detection performance, so it is essential to study the differences between target echo and reverberation. In this paper, motivated by the unique ability of human listeners to distinguish objects by ear, the Gammatone filter is taken as the auditory model, and time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. Features extracted from experimental data cluster tightly within each class and differ markedly between classes, showing that this method can effectively distinguish between target echo and reverberation.
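A minimal sketch of this kind of auditory-model front end, assuming SciPy 1.6+ (which provides signal.gammatone); the centre frequencies and the log envelope-energy summary are illustrative choices rather than the paper's exact features.

```python
import numpy as np
from scipy import signal

def gammatone_features(x, fs, centre_freqs=(200, 400, 800, 1600, 3200)):
    """Pass a signal through a bank of gammatone filters and summarize
    each band's envelope energy (centre frequencies are illustrative)."""
    feats = []
    for fc in centre_freqs:
        b, a = signal.gammatone(fc, 'iir', fs=fs)  # 4th-order IIR gammatone
        band = signal.lfilter(b, a, x)
        env = np.abs(signal.hilbert(band))         # band envelope
        feats.append(np.log(env.mean() + 1e-12))   # log mean envelope energy
    return np.array(feats)

fs = 16000
echo = np.random.randn(fs)  # stand-in for a recorded target echo
print(gammatone_features(echo, fs))
```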
ERIC Educational Resources Information Center
Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-01-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…
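The co-occurrence logic of cross-situational learning can be sketched in a few lines: across individually ambiguous trials, only the correct word-object pairings co-occur consistently. The simulation below is a toy illustration with hypothetical items, not the experiments' actual stimuli.

```python
import numpy as np

# Cross-situational learning sketch: accumulate word-object co-occurrence
# counts across ambiguous trials; the correct mapping dominates over time.
rng = np.random.default_rng(1)
n_pairs, n_trials, per_trial = 6, 30, 3
counts = np.zeros((n_pairs, n_pairs))  # rows: words, cols: objects

for _ in range(n_trials):
    present = rng.choice(n_pairs, size=per_trial, replace=False)
    for w in present:            # every word co-occurs with every object shown
        for o in present:
            counts[w, o] += 1    # only correct pairs co-occur on every trial

learned = counts.argmax(axis=1)  # pick the most frequent object per word
print((learned == np.arange(n_pairs)).mean())  # proportion of correct mappings
```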
NASA Astrophysics Data System (ADS)
Deprez, Hanne; Gransier, Robin; Hofmann, Michael; van Wieringen, Astrid; Wouters, Jan; Moonen, Marc
2018-02-01
Objective. Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach. ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results. For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance. We hypothesize that ICA is capable of separating CI artifacts and EASSR in case the contralateral hemisphere is EASSR dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
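A simplified sketch of the ICA-based attenuation pipeline described here, using scikit-learn's FastICA and a spectral criterion to flag artifactual components; the band and threshold below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy import signal
from sklearn.decomposition import FastICA

def attenuate_ci_artifacts(eeg, fs, artifact_band=(35, 45), ratio_thresh=10.0):
    """Reject independent components whose power concentrates in a
    stimulation-related band (values illustrative), then reconstruct."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)            # (time, components)
    for k in range(sources.shape[1]):
        f, pxx = signal.welch(sources[:, k], fs, nperseg=fs)
        in_band = pxx[(f >= artifact_band[0]) & (f <= artifact_band[1])].mean()
        out_band = pxx[(f < artifact_band[0]) | (f > artifact_band[1])].mean()
        if in_band / out_band > ratio_thresh:   # spectrum dominated by band
            sources[:, k] = 0.0                 # zero out artifactual IC
    return ica.inverse_transform(sources)       # back to channel space

fs = 1000
eeg = np.random.randn(10 * fs, 8)  # stand-in: 10 s of 8-channel EEG
clean = attenuate_ci_artifacts(eeg, fs)
```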
NASA Astrophysics Data System (ADS)
Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.
1995-08-01
A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.
Evaluation of a compact tinnitus therapy by electrophysiological tinnitus decompensation measures.
Low, Yin Fen; Argstatter, Heike; Bolay, Hans Volker; Strauss, Daniel J
2008-01-01
Large-scale neural correlates of the tinnitus decompensation have been identified by using wavelet phase stability criteria of single sweep sequences of auditory late responses (ALRs). Our previous work showed that the synchronization stability in ALR sequences might be used for objective quantification of the tinnitus decompensation and attention, which links to the Jastreboff tinnitus model. In this study, we intend to provide an objective evaluation for quantifying the effect of music therapy in tinnitus patients. We examined neural correlates of the attentional mechanism in single sweep sequences of ALRs in chronic tinnitus patients who underwent a compact therapy course, using the maximum entropy auditory paradigm. Our measure showed that the extent of differentiation between attended and unattended conditions improved significantly after the therapy. It is concluded that the wavelet phase synchronization stability of ALR single sweeps can be used for the objective evaluation of tinnitus therapies, in this case the compact tinnitus music therapy.
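A minimal sketch of a single-sweep phase-stability measure in the spirit of this approach: each sweep is convolved with a complex Morlet wavelet and inter-sweep phase locking is computed, with values near 1 indicating stable phase. The frequency, cycle count and random stand-in data are illustrative assumptions.

```python
import numpy as np

def phase_stability(sweeps, fs, freq=8.0, cycles=5):
    """Convolve each sweep with a complex Morlet wavelet at one frequency
    and return the inter-sweep phase-locking value at each time point."""
    n = int(cycles / freq * fs)
    t = (np.arange(n) - n / 2) / fs
    sigma = cycles / (2 * np.pi * freq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    analytic = np.array([np.convolve(s, wavelet, mode='same') for s in sweeps])
    phases = np.angle(analytic)                  # (sweeps, samples)
    return np.abs(np.exp(1j * phases).mean(axis=0))  # 1 = perfectly stable

fs = 500
sweeps = np.random.randn(100, fs)  # stand-in: 100 single sweeps, 1 s each
stability = phase_stability(sweeps, fs)
```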
Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237
Using Facebook to Reach People Who Experience Auditory Hallucinations.
Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror
2016-06-14
Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. Women, people who use mobile phones, and those experiencing more distress were reportedly more open to using Facebook as a support and/or therapeutic tool in the future. Facebook advertisements can be used to recruit research participants who experience auditory hallucinations quickly and in a cost-effective manner. Most (58%) Web-based respondents are open to Facebook-based support and treatment and are willing to describe their subjective experiences with auditory hallucinations.
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies
2016-01-01
Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE has a key role in A1 and in the attention of stressed rats during tone discrimination. PMID:28082872
Summary statistics in auditory perception.
McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P
2013-04-01
Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
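The core idea, representing a texture by time-averaged statistics of its band envelopes, can be sketched as follows; this simplified version uses Butterworth bands in place of a cochlear filterbank and computes only a subset of the statistics in the original model.

```python
import numpy as np
from scipy import signal

def texture_statistics(x, fs, n_bands=8, fmin=100.0, fmax=6000.0):
    """Time-averaged statistics of band envelopes: marginal means and
    variances per band plus between-band envelope correlations."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = signal.butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        band = signal.sosfilt(sos, x)
        envs.append(np.abs(signal.hilbert(band)))   # band envelope
    envs = np.array(envs)
    mean = envs.mean(axis=1)                         # marginal means
    var = envs.var(axis=1)                           # marginal variances
    corr = np.corrcoef(envs)[np.triu_indices(n_bands, k=1)]  # band correlations
    return np.concatenate([mean, var, corr])

fs = 16000
texture = np.random.randn(2 * fs)  # stand-in for a sound-texture excerpt
stats = texture_statistics(texture, fs)
```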
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
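The τ variable referred to above is the ratio of an instantaneous quantity to its rate of change. A short worked sketch with hypothetical numbers shows that optical τ recovers the true TTC for a constant-velocity approach:

```python
import numpy as np

def visual_tau(theta, dt):
    """tau = instantaneous optical size over its rate of change;
    estimates TTC for an object approaching at constant velocity."""
    return theta / np.gradient(theta, dt)

def auditory_tau(intensity, dt):
    """Analogous auditory cue: sound intensity over its rate of change."""
    return intensity / np.gradient(intensity, dt)

# Worked check: an object of size s at distance d(t) = d0 - v*t subtends
# theta ~ s/d, so tau should recover the true TTC of d0/v = 5 s at t = 0.
dt, s, d0, v = 0.01, 2.0, 50.0, 10.0
t = np.arange(0, 4, dt)
theta = s / (d0 - v * t)
print(visual_tau(theta, dt)[0])  # ~5.0 s
```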
Evaluation of auditory perception development in neonates by event-related potential technique.
Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan
2017-08-01
To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development and sex using event-related potential (ERP) technique. Sixty full-term neonates, consisting of 32 males and 28 females, aged 2-28days were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth, to study on relationship between auditory perception and age, and comparison of left and right hemispheres, and males and females. Average wave forms of ERPs in neonates started from relatively irregular flat-bottomed troughs to relatively regular steep-sided ripples. A good linear relationship between ERPs and days after birth in neonates was observed. As days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P<0.01). N2 areas in the central part of the brain were significantly greater, and N2 latencies in the central part were significantly shorter in the left hemisphere compared with the right, indicative of left hemisphere dominance (both P<0.05). N2 areas were greater and N2 latencies shorter in female neonates compared with males. The neonatal period is one of rapid auditory perception development. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index used to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Auditory risk estimates for youth target shooting
Meinke, Deanna K.; Murphy, William J.; Finan, Donald S.; Lankford, James E.; Flamme, Gregory A.; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W.
2015-01-01
Objective To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or reflective surface when compared to adult shooters. Design Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Study sample Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter’s left ear. Results All firearms generated peak levels that exceeded the 120 dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8, and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Conclusion Youth should utilize smaller caliber weapons, preferably from the standing position, and always wear hearing protection whenever engaging in shooting activities to reduce the risk for auditory damage. PMID:24564688
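The quantities involved in such damage-risk evaluations can be sketched as follows: peak SPL from a calibrated pressure waveform, plus an equivalent continuous level normalized to an 8-hour day. This is a simplified, unweighted sketch (LAeq8 proper applies A-weighting), and the synthetic impulse is a stand-in, not measured gunfire.

```python
import numpy as np

P_REF = 20e-6  # reference pressure in pascals

def peak_spl(p):
    """Peak sound pressure level (dB) of a pressure waveform in Pa."""
    return 20 * np.log10(np.max(np.abs(p)) / P_REF)

def leq_8h(p, fs, n_impulses_per_day=1):
    """Equivalent continuous level over an 8-hour day (unweighted sketch;
    the criterion measure applies A-weighting before averaging)."""
    energy = np.sum(p ** 2) / fs * n_impulses_per_day  # Pa^2*s per day
    return 10 * np.log10(energy / (8 * 3600) / P_REF ** 2)

fs = 200_000                          # impulses need a high sample rate
t = np.arange(0, 0.01, 1 / fs)
impulse = 400.0 * np.exp(-t / 1e-3)   # synthetic decay, ~146 dB peak
print(peak_spl(impulse) > 120)        # exceeds the WHO child peak limit?
print(leq_8h(impulse, fs, n_impulses_per_day=50))
```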
Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.
2016-01-01
Objective To characterize the impulse noise exposure and auditory risk for air rifle users for both youth and adults. Design Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, but 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and the pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion To minimize auditory risk, youth should utilize air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923
A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing
Aboitiz, Francisco
2018-01-01
In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657
Psychophysiological responses to masked auditory stimuli.
Borgeat, F; Elie, R; Chaloult, L; Chabot, R
1985-02-01
Psychophysiological responses to masked auditory verbal stimuli of increasing intensities were studied in twenty healthy women. Two experimental sessions corresponding to two stimulation contents (neutral or emotional) were conducted. At each session, two different sets of instructions (attending or not attending to stimuli) were used successively. Verbal stimuli, masked by a 40-dB white noise, were presented to the subject at increasing intensities by increments of 5 dB starting at 0 dB. At each increment, frontal EMG, skin conductance and heart rate were recorded. The data were submitted to analyses of variance and covariance. Psychophysiological responses to stimuli below the thresholds of identification and detection were observed. The instruction not to attend to the stimuli modified the patterns of physiological responses. The effect of the affective content of the stimuli on responses was stronger when not attending. The results demonstrate the possibility of psychophysiological responses to masked auditory stimuli and suggest that psychophysiological parameters can constitute objective and useful measures for research in auditory subliminal perception.
Stability of auditory discrimination and novelty processing in physiological aging.
Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele
2013-01-01
Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.
NASA Astrophysics Data System (ADS)
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
2015-02-01
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
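As illustration of the coupling analysis described above, a minimal numpy/scipy sketch of one of the three measures named, the phase-locking value, applied here to two synthetic theta-band signals; the band edges, filter order, and test signals are assumptions for the example, not the paper's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value of two signals within a frequency band (0..1)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two noisy signals sharing a 6 Hz (theta) component.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 6 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)
print(f"theta PLV ~ {plv(x, y, fs, (4, 8)):.2f}")  # approaches 1 with strong coupling
```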
Prevalence of auditory changes in newborns in a teaching hospital
Guimarães, Valeriana de Castro; Barbosa, Maria Alves
2012-01-01
Summary Introduction: Early diagnosis of and intervention in deafness are of fundamental importance for child development, and hearing loss is more prevalent than other disorders found at birth. Objective: To estimate the prevalence of auditory alterations in newborns at a teaching hospital. Method: Prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 newborns screened, 46 (20.4%) showed absence of emissions and were referred for a second emission test. Of the 26 (56.5%) children who returned for the retest, 8 (30.8%) still showed absent emissions and were referred to an otolaryngologist. Five (55.5%) attended and were examined by the physician. Of these, 3 (75.0%) presented normal otoscopy and were referred for Brainstem Auditory Evoked Potential (PEATE) evaluation. Of all the children studied, 198 (87.6%) showed presence of emissions in one of the tests, and 2 (0.9%) received a diagnosis of deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers valuable epidemiological data and presents the first report on the subject, supplying preliminary results for the future implementation and development of a neonatal hearing screening program. PMID:25991933
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
2016-10-01
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
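A small sketch of the subtraction logic described above, with stand-in data (the sampling rate, readout window, and simulated ERPs are assumptions; only the left-minus-right construction and the ~500 ms peak come from the abstract):

```python
import numpy as np

fs = 500
times = np.arange(-0.2, 1.0, 1 / fs)
rng = np.random.default_rng(1)
erp_left = rng.normal(0.0, 1e-6, times.size)   # trial average, left-target trials
erp_right = rng.normal(0.0, 1e-6, times.size)  # trial average, right-target trials

# N2ac: difference waveform for targets left minus targets right, read out
# in a window around its reported ~500 ms peak.
n2ac = erp_left - erp_right
window = (times >= 0.45) & (times <= 0.55)
n2ac_amplitude = n2ac[window].mean()
```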
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
Spine Formation and Maturation in the Developing Rat Auditory Cortex
Schachtele, Scott J.; Losh, Joe; Dailey, Michael E.; Green, Steven H.
2013-01-01
The rat auditory cortex is organized as a tonotopic map of sound frequency. This map is broadly tuned at birth and is refined during the first 3 weeks postnatal. The structural correlates underlying tonotopic map maturation and reorganization during development are poorly understood. We employed fluorescent dye ballistic labeling (“DiOlistics”) alone, or in conjunction with immunohistochemistry, to quantify synaptogenesis in the auditory cortex of normal hearing rats. We show that the developmental appearance of dendritic protrusions, which include both immature filopodia and mature spines, on layers 2/3, 4, and 5 pyramidal and layer 4 spiny nonpyramidal neurons occurs in three phases: slow addition of dendritic protrusions from postnatal day 4 (P4) to P9, rapid addition of dendritic protrusions from P9 to P19, and a final phase where mature protrusion density is achieved (>P21). Next, we combined DiOlistics with immunohistochemical labeling of bassoon, a presynaptic scaffolding protein, as a novel method to categorize dendritic protrusions as either filopodia or mature spines in cortex fixed in vivo. Using this method we observed an increase in the spine-to-filopodium ratio from P9–P16, indicating a period of rapid spine maturation. Previous studies report mature spines as being shorter in length compared to filopodia. We similarly observed a reduction in protrusion length between P9 and P16, corroborating our immunohistochemical spine maturation data. These studies show that dendritic protrusion formation and spine maturation occur rapidly at a time previously shown to correspond to auditory cortical tonotopic map refinement (P11–P14), providing a structural correlate of physiological maturation. PMID:21800311
1991-09-01
just one modality (e.g., visual or auditory agnosia) or impaired manipulation of objects with specific uses, despite intact recognition of them (apraxia). ... Journal of Neurology, Neurosurgery and Psychiatry, 51, 1201-1207. Farah, M. J. (1991). Patterns of co-occurrence among the associative agnosias: Implications for visual object ...
Ross, Bernhard; Barat, Masihullah; Fujioka, Takako
2017-06-14
Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training. Copyright © 2017 the authors 0270-6474/17/375948-12$15.00/0.
Gurnsey, Kate; Salisbury, Dean; Sweet, Robert A.
2016-01-01
Auditory refractoriness refers to the finding of smaller electroencephalographic (EEG) responses to tones preceded by shorter periods of silence. To date, its physiological mechanisms remain unclear, limiting the insights gained from findings of abnormal refractoriness in individuals with schizophrenia. To resolve this roadblock, we studied auditory refractoriness in the rhesus, one of the most important animal models of auditory function, using grids of up to 32 chronically implanted cranial EEG electrodes. Four macaques passively listened to sounds whose identity and timing was random, thus preventing animals from forming valid predictions about upcoming sounds. Stimulus onset asynchrony ranged between 0.2 and 12.8 s, thus encompassing the clinically relevant timescale of refractoriness. Our results show refractoriness in all 8 previously identified middle- and long-latency components that peaked between 14 and 170 ms after tone onset. Refractoriness may reflect the formation and gradual decay of a basic sensory memory trace that may be mirrored by the expenditure and gradual recovery of a limited physiological resource that determines generator excitability. For all 8 components, results were consistent with the assumption that processing of each tone expends ∼65% of the available resource. Differences between components are caused by how quickly the resource recovers. Recovery time constants of different components ranged between 0.5 and 2 s. This work provides a solid conceptual, methodological, and computational foundation to dissect the physiological mechanisms of auditory refractoriness in the rhesus. Such knowledge may, in turn, help develop novel pharmacological, mechanism-targeted interventions. PMID:27512021
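One simple way to make the resource account above quantitative (a reading of the abstract, not the authors' published code): each tone expends a fraction k ≈ 0.65 of a resource that recovers exponentially with a component-specific time constant tau, so predicted amplitude grows with stimulus onset asynchrony:

```python
import numpy as np

def predicted_amplitude(soa, k=0.65, tau=1.0, a_max=1.0):
    """Relative ERP amplitude at a given stimulus onset asynchrony (s), assuming
    each tone expends a fraction k of a resource that recovers with constant tau."""
    return a_max * (1.0 - k * np.exp(-soa / tau))

for soa in (0.2, 0.8, 3.2, 12.8):  # spanning the SOA range used in the study
    print(f"SOA {soa:5.1f} s -> amplitude {predicted_amplitude(soa):.2f} of maximum")
```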
Bidelman, Gavin M
2016-10-01
Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
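A sketch of one way to reduce such data to a "temporal window" estimate, here simply the span of audio-visual asynchronies at which illusory double-flash reports exceed a 50% criterion; the response rates below are invented for illustration:

```python
import numpy as np

soa_ms = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])  # audio lead/lag
p_illusion = np.array([0.05, 0.10, 0.35, 0.70, 0.85, 0.75, 0.40, 0.12, 0.06])

above = soa_ms[p_illusion > 0.5]                 # asynchronies yielding fusion
window_ms = above.max() - above.min() if above.size else 0
print(f"estimated integration window ~ {window_ms} ms")
```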
Driver memory for in-vehicle visual and auditory messages
DOT National Transportation Integrated Search
1999-12-01
Three experiments were conducted in a driving simulator to evaluate effects of in-vehicle message modality and message format on comprehension and memory for younger and older drivers. Visual icons and text messages were effective in terms of high co...
Identification of a pathway for intelligible speech in the left temporal lobe
Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.
2017-01-01
Summary It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443
Effects of Long-Term Musical Training on Cortical Evoked Auditory Potentials
Brown, Carolyn J.; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul
2016-01-01
Objective Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared to non-musicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the auditory change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and non-musicians. Design Twenty individuals (10 musicians and 10 non-musicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three interval, forced choice procedure and the ACC was recorded and used as an objective (i.e. non-behavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. Results As a group, musicians were able to detect smaller changes in pitch than non-musicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than non-musicians. ACC responses recorded from musicians were larger than those recorded from non-musicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than non-musicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the rippled noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Conclusions Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than non-musicians. Those differences are evident not only in perceptual/behavioral tests, but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based, auditory training program perhaps geared toward pediatric and/or hearing-impaired listeners. PMID:28225736
Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard
2015-08-01
In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
Ebbers, Lena; Weber, Maren; Nothwang, Hans Gerd
2017-10-26
In the mammalian superior olivary complex (SOC), synaptic inhibition contributes to the processing of binaural sound cues important for sound localization. Previous analyses demonstrated a tonotopic gradient for postsynaptic proteins mediating inhibitory neurotransmission in the lateral superior olive (LSO), a major nucleus of the SOC. To probe whether a presynaptic molecular gradient exists as well, we investigated immunoreactivity against the vesicular inhibitory amino acid transporter (VIAAT) in the mouse auditory brainstem. Immunoreactivity against VIAAT revealed a gradient in the LSO and the superior paraolivary nucleus (SPN) of NMRI mice, with high expression in the lateral, low frequency processing limb and low expression in the medial, high frequency processing limb of both nuclei. This orientation is opposite to the previously reported gradient of glycine receptors in the LSO. Other nuclei of the SOC showed a uniform distribution of VIAAT-immunoreactivity. No gradient was observed for the glycine transporter GlyT2 and the neuronal protein NeuN. Formation of the VIAAT gradient was developmentally regulated and occurred around hearing-onset between postnatal days 8 and 16. Congenitally deaf Claudin14-/- mice bred on an NMRI background showed a uniform VIAAT-immunoreactivity in the LSO, whereas cochlear ablation in NMRI mice after hearing-onset did not affect the gradient. Additional analysis of C57Bl6/J, 129/SvJ and CBA/J mice revealed a strain-specific formation of the gradient. Our results identify an activity-regulated gradient of VIAAT in the SOC of NMRI mice. Its absence in other mouse strains adds a novel layer of strain-specific features in the auditory system, i.e. tonotopic organization of molecular gradients. This calls for caution when comparing data from different mouse strains frequently used in studies involving transgenic animals. The presence of strain-specific differences offers the possibility of genetic mapping to identify molecular factors involved in activity-dependent developmental processes in the auditory system. This would provide an important step forward concerning improved auditory rehabilitation in cases of congenital deafness.
47 CFR 14.21 - Performance Objectives.
Code of Federal Regulations, 2013 CFR
2013-10-01
... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...
47 CFR 14.21 - Performance Objectives.
Code of Federal Regulations, 2014 CFR
2014-10-01
... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...
Auditory closed-loop stimulation of the sleep slow oscillation enhances memory.
Ngo, Hong-Viet V; Martinetz, Thomas; Born, Jan; Mölle, Matthias
2013-05-08
Brain rhythms regulate information processing in different states to enable learning and memory formation. The <1 Hz sleep slow oscillation hallmarks slow-wave sleep and is critical to memory consolidation. Here we show in sleeping humans that auditory stimulation in phase with the ongoing rhythmic occurrence of slow oscillation up states profoundly enhances the slow oscillation rhythm, phase-coupled spindle activity, and, consequently, the consolidation of declarative memory. Stimulation out of phase with the ongoing slow oscillation rhythm remained ineffective. Closed-loop in-phase stimulation provides a straight-forward tool to enhance sleep rhythms and their functional efficacy. Copyright © 2013 Elsevier Inc. All rights reserved.
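An offline sketch of the closed-loop idea, assuming a detected down-state threshold crossing plus a fixed delay predicts the next up state (the threshold, delay, and simulated EEG are illustrative; the actual system tracked the ongoing oscillation in real time):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 0.8 * t) + 0.3 * rng.standard_normal(t.size)  # mock sleep EEG

# Isolate the <1 Hz slow oscillation, detect pronounced down states, and
# schedule a click a fixed delay later so it lands on the following up state.
b, a = butter(2, 1.25 / (fs / 2), btype="low")
slow = filtfilt(b, a, eeg)
threshold = -0.8
down_states = np.where((slow[:-1] > threshold) & (slow[1:] <= threshold))[0]
trigger_times = t[down_states] + 0.5   # 0.5 s delay is an illustrative value
```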
Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei
2012-06-09
Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to impose a high conscious burden. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and to examine how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. 10 male subjects (26 +/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experimental setting were explained, after which they completed 30 minutes of guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, which in turn can be attributed to a visual "pop-out" or enhancement effect. Also, the NASA TLX, the EEG alpha and beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications.
Sex differences present in auditory looming perception, absent in auditory recession
NASA Astrophysics Data System (ADS)
Neuhoff, John G.; Seifritz, Erich
2005-04-01
When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.
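A worked example of the physical cue behind the bias: under the inverse-square law the level of a point source rises by 20·log10(d1/d2) dB as it approaches, an accelerating intensity ramp that listeners anticipate (the distances below are arbitrary):

```python
import numpy as np

d = np.array([16.0, 8.0, 4.0, 2.0, 1.0])   # source distance in metres
level_gain = 20 * np.log10(d[0] / d)       # dB re the starting distance
print(np.round(level_gain, 1))             # [ 0.   6.  12.  18.1 24.1]
```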
Cue-recruitment for extrinsic signals after training with low information stimuli.
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T
2014-01-01
Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.
Tone Series and the Nature of Working Memory Capacity Development
ERIC Educational Resources Information Center
Clark, Katherine M.; Hardman, Kyle O.; Schachtman, Todd R.; Saults, J. Scott; Glass, Bret A.; Cowan, Nelson
2018-01-01
Recent advances in understanding visual working memory, the limited information held in mind for use in ongoing processing, are extended here to examine auditory working memory development. Research with arrays of visual objects has shown how to distinguish the capacity, in terms of the "number" of objects retained, from the…
Getzmann, Stephan; Näätänen, Risto
2015-11-01
With age the ability to understand speech in multitalker environments usually deteriorates. The central auditory system has to perceptually segregate and group the acoustic input into sequences of distinct auditory objects. The present study used electrophysiological measures to study effects of age on auditory stream segregation in a multitalker scenario. Younger and older adults were presented with streams of short speech stimuli. When a single target stream was presented, the occurrence of a rare (deviant) syllable among a frequent (standard) syllable elicited the mismatch negativity (MMN), an electrophysiological correlate of automatic deviance detection. The presence of a second, concurrent stream consisting of the deviant syllable of the target stream reduced the MMN amplitude, especially when located near the target stream. The decrease in MMN amplitude indicates that the rare syllable of the target stream was perceived as less deviant, suggesting reduced stream segregation with decreasing stream distance. Moreover, the presence of a concurrent stream increased the MMN peak latency of the older group but not that of the younger group. The results provide neurophysiological evidence for the effects of concurrent speech on auditory processing in older adults, suggesting that older adults need more time for stream segregation in the presence of concurrent speech. Copyright © 2015 Elsevier Inc. All rights reserved.
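For reference, the MMN is conventionally obtained as a deviant-minus-standard difference of trial-averaged ERPs, with amplitude and peak latency read from a post-stimulus window; a minimal sketch with simulated averages (window bounds and waveforms are assumptions):

```python
import numpy as np

times = np.linspace(-0.1, 0.4, 256)                   # seconds
erp_standard = np.zeros_like(times)                   # stand-in trial averages
erp_deviant = -2e-6 * np.exp(-((times - 0.15) ** 2) / (2 * 0.02 ** 2))

mmn = erp_deviant - erp_standard                      # difference waveform
window = (times > 0.10) & (times < 0.25)
amplitude = mmn[window].min()                         # MMN is a negativity
peak_latency = times[window][np.argmin(mmn[window])]  # ~0.15 s here
```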
Neurons and Objects: The Case of Auditory Cortex
Nelken, Israel; Bar-Yosef, Omer
2008-01-01
Sounds are encoded into electrical activity in the inner ear, where they are represented (roughly) as patterns of energy in narrow frequency bands. However, sounds are perceived in terms of their high-order properties. It is generally believed that this transformation is performed along the auditory hierarchy, with low-level physical cues computed at early stages of the auditory system and high-level abstract qualities at high-order cortical areas. The functional position of primary auditory cortex (A1) in this scheme is unclear – is it ‘early’, encoding physical cues, or is it ‘late’, already encoding abstract qualities? Here we argue that neurons in cat A1 show sensitivity to high-level features of sounds. In particular, these neurons may already show sensitivity to ‘auditory objects’. The evidence for this claim comes from studies in which individual sounds are presented singly and in mixtures. Many neurons in cat A1 respond to mixtures in the same way they respond to one of the individual components of the mixture, and in many cases neurons may respond to a low-level component of the mixture rather than to the acoustically dominant one, even though the same neurons respond to the acoustically-dominant component when presented alone. PMID:18982113
P50 suppression in children with selective mutism: a preliminary report.
Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair
2010-01-01
Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in children with SM. The classic paired-click paradigm was utilized to assess suppression of the P50 event-related potential to the second of two sequentially presented clicks in ten children with SM and 10 control children. A significant suppression of P50 to the second click was evident in the SM group, whereas no suppression effect was observed in controls. Suppression was evident in 90% of the SM group and in 40% of controls, whereas augmentation was found in 10% and 60%, respectively, yielding a significant association between group and suppression of P50. P50 to the first click was comparable in children with SM and controls. The adult-like, mature P50 suppression effect exhibited by children with SM may reflect a cortical mechanism of compensatory inhibition of irrelevant repetitive information that was not properly suppressed at lower levels of their auditory system. The current data extend our previous findings suggesting that differential auditory processing may be involved in speech selectivity in SM.
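Paired-click suppression is commonly quantified as the ratio of the P50 amplitude to the second click (S2) over the first (S1), with ratios below 1 indicating suppression and above 1 augmentation; the amplitudes below are invented for illustration:

```python
s1_amplitude = 2.4   # microvolts, P50 to the first click (illustrative)
s2_amplitude = 0.9   # microvolts, P50 to the second click (illustrative)

ratio = s2_amplitude / s1_amplitude
print(f"S2/S1 = {ratio:.2f}: {'suppression' if ratio < 1 else 'augmentation'}")
```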
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between the unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
Sullivan, Jessica R.; Thibodeau, Linda M.; Assmann, Peter F.
2013-01-01
Previous studies have indicated that individuals with normal hearing (NH) experience a perceptual advantage for speech recognition in interrupted noise compared to continuous noise. In contrast, adults with hearing impairment (HI) and younger children with NH receive a minimal benefit. The objective of this investigation was to assess whether auditory training in interrupted noise would improve speech recognition in noise for children with HI and perhaps enhance their utilization of glimpsing skills. A partially-repeated measures design was used to evaluate the effectiveness of seven 1-h sessions of auditory training in interrupted and continuous noise. Speech recognition scores in interrupted and continuous noise were obtained from pre-, post-, and 3 months post-training from 24 children with moderate-to-severe hearing loss. Children who participated in auditory training in interrupted noise demonstrated a significantly greater improvement in speech recognition compared to those who trained in continuous noise. Those who trained in interrupted noise demonstrated similar improvements in both noise conditions while those who trained in continuous noise only showed modest improvements in the interrupted noise condition. This study presents direct evidence that auditory training in interrupted noise can be beneficial in improving speech recognition in noise for children with HI. PMID:23297921
Relation between measures of speech-in-noise performance and measures of efferent activity
NASA Astrophysics Data System (ADS)
Smith, Brad; Harkrider, Ashley; Burchfield, Samuel; Nabelek, Anna
2003-04-01
Individual differences in auditory perceptual abilities in noise are well documented but the factors causing such variability are unclear. The purpose of this study was to determine if individual differences in responses measured from the auditory efferent system were correlated to individual variations in speech-in-noise performance. The relation between behavioral performance on three speech-in-noise tasks and two objective measures of the efferent auditory system were examined in thirty normal-hearing, young adults. Two of the speech-in-noise tasks measured an acceptable noise level, the maximum level of speech-babble noise that a subject is willing to accept while listening to a story. For these, the acceptable noise level was evaluated using both an ipsilateral (story and noise in same ear) and a contralateral (story and noise in opposite ears) paradigm. The third speech-in-noise task evaluated speech recognition using monosyllabic words presented in competing speech babble. Auditory efferent activity was assessed by examining the resulting suppression of click-evoked otoacoustic emissions following the introduction of a contralateral, broad-band stimulus and the activity of the ipsilateral and contralateral acoustic reflex arc was evaluated using tones and broad-band noise. Results will be discussed relative to current theories of speech in noise performance and auditory inhibitory processes.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Abstract Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation
Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.
2010-01-01
Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540
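The sensitivity, specificity, and positive-predictive-value figures above follow from the standard confusion-matrix definitions; a tiny sketch with hypothetical electrode counts chosen only so the quoted N1 numbers fall out (the paper's actual counts are not reproduced here):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical counts: 6/8 ESM-critical sites show an N1 (sensitivity 0.75),
# 58/71 non-critical sites show none (specificity ~0.82), PPV 6/19 ~0.32.
print(diagnostic_metrics(tp=6, fp=13, fn=2, tn=58))
```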
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
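A sketch of the race-model inequality test mentioned above (Miller's bound): if the multisensory reaction-time CDF ever exceeds the sum of the two unisensory CDFs, probability summation alone cannot explain the speed-up. The simulated reaction times and parameters are assumptions:

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

rng = np.random.default_rng(3)
rt_a = rng.normal(0.32, 0.05, 200)    # auditory-only RTs (s), simulated
rt_v = rng.normal(0.35, 0.05, 200)    # visual-only RTs (s), simulated
rt_av = rng.normal(0.27, 0.04, 200)   # audiovisual RTs (s), simulated faster

grid = np.linspace(0.1, 0.6, 101)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
violation = np.max(ecdf(rt_av, grid) - bound)
print(f"max race-model violation: {violation:.3f}")  # > 0 implies integration
```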
Auditory steady-state response in cochlear implant patients.
Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo
2018-03-19
Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study is to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation under free-field conditions, and to verify its biological origin. 11 subjects comprised the sample. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. Auditory steady-state responses were also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled the electrophysiological thresholds to be obtained for each subject of the explored sample. There were no auditory steady-state responses in either the 0 dB or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. A difference was identified between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18 dB at frequencies of 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique for evaluating the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
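A sketch of a multiple-frequency ASSR stimulus of the kind described: each carrier is 100% amplitude-modulated at its own rate in the 70-110 Hz range so the four responses can be separated in the EEG spectrum (the specific modulation rates and sampling rate are illustrative assumptions):

```python
import numpy as np

fs = 44_100                            # audio sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
carriers = (500, 1000, 2000, 4000)     # Hz, the four audiometric frequencies
am_rates = (77, 85, 93, 101)           # Hz, one modulation rate per carrier

stim = sum(
    (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
    for fc, fm in zip(carriers, am_rates)
)
stim = stim / np.max(np.abs(stim))     # normalize for presentation
```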
Audio-vocal system regulation in children with autism spectrum disorders.
Russo, Nicole; Larson, Charles; Kraus, Nina
2008-06-01
Do children with autism spectrum disorders (ASD) respond similarly to perturbations in auditory feedback as typically developing (TD) children? Presentation of pitch-shifted voice auditory feedback to vocalizing participants reveals a close coupling between the processing of auditory feedback and vocal motor control. This paradigm was used to test the hypothesis that abnormalities in the audio-vocal system would negatively impact ASD compensatory responses to perturbed auditory feedback. Voice fundamental frequency (F(0)) was measured while children produced an /a/ sound into a microphone. The voice signal was fed back to the subjects in real time through headphones. During production, the feedback was pitch shifted (-100 cents, 200 ms) at random intervals for 80 trials. Averaged voice F(0) responses to pitch-shifted stimuli were calculated and correlated with both mental and language abilities as tested via standardized tests. A subset of children with ASD produced larger responses to perturbed auditory feedback than TD children, while the other children with ASD produced significantly lower response magnitudes. Furthermore, robust relationships between language ability, response magnitude and time of peak magnitude were identified. Because auditory feedback helps to stabilize voice F(0) (a major acoustic cue of prosody) and individuals with ASD have problems with prosody, this study identified potential mechanisms of dysfunction in the audio-vocal system for voice pitch regulation in some children with ASD. Objectively quantifying this deficit may inform both the assessment of a subgroup of ASD children with prosody deficits, as well as remediation strategies that incorporate pitch training.
Sight and sound converge to form modality-invariant representations in temporo-parietal cortex
Man, Kingson; Kaplan, Jonas T.; Damasio, Antonio; Meyer, Kaspar
2013-01-01
People can identify objects in the environment with remarkable accuracy, irrespective of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content-specificity (i.e., do different objects evoke different representations?) and modality-invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content-specificity and modality-invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supra-modal (i.e., conceptual) level. PMID:23175818
Butler, Christopher W; Wilson, Yvette M; Gunnersen, Jenny M; Murphy, Mark
2015-08-01
Memory formation is thought to occur via enhanced synaptic connectivity between populations of neurons in the brain. However, it has been difficult to localize and identify the neurons that are directly involved in the formation of any specific memory. We have previously used fos-tau-lacZ (FTL) transgenic mice to identify discrete populations of neurons in amygdala and hypothalamus, which were specifically activated by fear conditioning to a context. Here we have examined neuronal activation due to fear conditioning to a more specific auditory cue. Discrete populations of learning-specific neurons were identified in only a small number of locations in the brain, including those previously found to be activated in amygdala and hypothalamus by context fear conditioning. These populations, each containing only a relatively small number of neurons, may be directly involved in fear learning and memory. © 2015 Butler et al.; Published by Cold Spring Harbor Laboratory Press.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2011-01-01
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables--disambiguated to /pa/ or /ta/ by the visual channel (speaking face)--served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point at a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
Nawroth, Christian; von Borell, Eberhard
2015-05-01
Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion against losses compared to non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, in this study, we present a series of studies investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder: the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, either in a visual or auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, the performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.
Malformation of the eighth cranial nerve in children.
de Paula-Vernetta, Carlos; Muñoz-Fernández, Noelia; Mas-Estellés, Fernando; Guzmán-Calvete, Abel; Cavallé-Garrido, Laura; Morera-Pérez, Constantino
2016-01-01
Prevalence of congenital sensorineural hearing loss (SNHL) is approximately 1.5-6 in every 1,000 newborns. Dysfunction of the auditory nerve (auditory neuropathy) may be involved in up to 1%-10% of cases; hearing loss due to vestibulocochlear nerve (VCN) aplasia is less frequent. The objectives of this study were to describe the clinical manifestations, hearing thresholds and aetiology of children with SNHL and VCN aplasia. We present 34 children (mean age 20 months) with auditory nerve malformation and profound HL, taken from a sample of 385 children implanted in a 10-year period. We studied demographic characteristics, hearing, genetics, risk factors and associated malformations (Casselman's and Sennaroglu's classifications). Data were processed using a bivariate descriptive statistical analysis (P<.05). Of all the cases, 58.8% were bilateral (IIa/IIa and I/I were the most common). Of the unilateral cases, IIb was the most frequent. Auditory screening showed a sensitivity of 77.4%. A relationship between bilateral cases and systemic pathology was observed. We found a statistically significant difference when comparing hearing impairment across patients with different types of aplasia as defined by Casselman's classification. Computed tomography (CT) scan yielded a sensitivity of 46.3% and a specificity of 85.7%; however, magnetic resonance imaging (MRI) was the most sensitive imaging test. Ten percent of the children in a cochlear implant study had aplasia or hypoplasia of the auditory nerve. The degree of auditory loss was directly related to the different types of aplasia (Casselman's classification). Although CT and MRI are complementary, MRI is the test of choice for detecting auditory nerve malformation. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
da Silva, Sheila Ap. F.; Guida, Heraldo L.; dos Santos Antonio, Ana Marcia; de Abreu, Luiz Carlos; Monteiro, Carlos B. M.; Ferreira, Celso; Ribeiro, Vivian F.; Barnabe, Viviani; Silva, Sidney B.; Fonseca, Fernando L. A.; Adami, Fernando; Petenusso, Marcio; Raimundo, Rodrigo D.; Valenti, Vitor E.
2014-01-01
Background: No clear evidence is available in the literature regarding the acute effect of different styles of music on cardiac autonomic control. Objectives: The present study aimed to evaluate the acute effects of classical baroque and heavy metal auditory stimulation on heart rate variability (HRV) in healthy men. Patients and Methods: HRV was analyzed in the time domain (SDNN, RMSSD, NN50, and pNN50) and the frequency domain (LF, HF, and LF/HF) in 12 healthy men. HRV was recorded at seated rest for 10 minutes. Subsequently, the participants were exposed to classical baroque or heavy metal music for five minutes through an earphone at seated rest. After exposure to the first piece, they remained at rest for five minutes and were then again exposed to classical baroque or heavy metal music; the sequence of the two styles was randomized for each individual. Standard statistical methods were used to calculate means and standard deviations, and ANOVA and the Friedman test were used for parametric and non-parametric distributions, respectively. Results: While listening to heavy metal music, SDNN was reduced compared to baseline (P = 0.023). In addition, the LF index (ms2 and nu) was reduced during exposure to both heavy metal and classical baroque auditory stimulation compared to the control condition (P = 0.010 and P = 0.048, respectively). However, the HF index (ms2) was reduced only during auditory stimulation with heavy metal music (P = 0.01). The LF/HF ratio, on the other hand, decreased during auditory stimulation with classical baroque music (P = 0.019). Conclusions: Acute auditory stimulation with the selected heavy metal music decreased both sympathetic and parasympathetic modulation of the heart, while exposure to the selected classical baroque music reduced sympathetic regulation of the heart. PMID:25177673
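For reference, the time-domain HRV indices named above follow directly from the inter-beat (RR) interval series; below is a minimal Python sketch, assuming an RR series in milliseconds (the function name and defaults are illustrative, not the authors' analysis pipeline):

    import numpy as np

    def hrv_time_domain(rr_ms):
        """Standard time-domain HRV indices from an RR-interval series (ms)."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)                       # successive RR differences
        sdnn = np.std(rr, ddof=1)                # SD of all RR intervals
        rmssd = np.sqrt(np.mean(diff ** 2))      # RMS of successive differences
        nn50 = int(np.sum(np.abs(diff) > 50.0))  # pairs differing by > 50 ms
        pnn50 = 100.0 * nn50 / len(diff)         # NN50 as a percentage
        return {"SDNN": sdnn, "RMSSD": rmssd, "NN50": nn50, "pNN50": pnn50}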
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Cognitive fatigue in patients with myasthenia gravis.
Jordan, Berit; Schweden, Tabea L K; Mehl, Theresa; Menge, Uwe; Zierz, Stephan
2017-09-01
Cognitive fatigue has frequently been reported in myasthenia gravis (MG), but cognitive fatigability has never been objectively assessed. Thirty-three MG patients with stable generalized disease and 17 healthy controls underwent a test battery including repeated testing of attention and concentration (d2-R) and the Paced Auditory Serial Addition Test (PASAT). Fatigability was quantified as the linear trend (LT) of performance across consecutive, constant-length time intervals. Additionally, fatigue questionnaires were used. MG patients showed a negative LT on second d2-R testing, indicating cognitive fatigability. This finding differed significantly from the stable cognitive performance of controls (P < 0.05). PASAT results did not differ between groups. Self-assessed fatigue was significantly higher in MG patients compared with controls (P < 0.001), but did not correlate with LT. LT quantifies cognitive fatigability as an objective measurement of performance decline in MG patients. Self-assessed cognitive fatigue is not correlated with objective findings. Muscle Nerve 56: 449-457, 2017. © 2016 Wiley Periodicals, Inc.
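The LT measure described above is, at its core, a slope over repeated interval scores. The exact formula is not given in the abstract, so the helper below is an illustrative least-squares version, not the authors' implementation:

    import numpy as np

    def linear_trend(interval_scores):
        """Least-squares slope of performance across consecutive intervals.

        A negative slope over a testing block indicates declining
        performance, i.e., objective cognitive fatigability."""
        t = np.arange(len(interval_scores), dtype=float)
        slope, _intercept = np.polyfit(t, np.asarray(interval_scores, float), 1)
        return slope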
Eukel, Heidi N.; Frenzel, Jeanne E.; Werremeyer, Amy; McDaniel, Becky
2016-01-01
Objective. To increase student pharmacist empathy through the use of an auditory hallucination simulation. Design. Third-year professional pharmacy students independently completed seven stations requiring skills such as communication, following directions, reading comprehension, and cognition while listening to an audio recording simulating what one experiencing auditory hallucinations may hear. Following the simulation, students participated in a faculty-led debriefing and completed a written reflection. Assessment. The Kiersma-Chen Empathy Scale was completed by each student before and after the simulation to measure changes in empathy. The written reflections were read and qualitatively analyzed. Empathy scores increased significantly after the simulation. Qualitative analysis showed students most frequently reported feeling distracted and frustrated. All student participants recommended the simulation be offered to other student pharmacists, and 99% felt the simulation would impact their future careers. Conclusions. With approximately 10 million adult Americans suffering from serious mental illness, it is important for pharmacy educators to prepare students to provide adequate patient care to this population. This auditory hallucination simulation increased student pharmacist empathy for patients with mental illness. PMID:27899838
Subcortical encoding of sound is enhanced in bilinguals and relates to executive function advantages
Krizman, Jennifer; Marian, Viorica; Shook, Anthony; Skoe, Erika; Kraus, Nina
2012-01-01
Bilingualism profoundly affects the brain, yielding functional and structural changes in cortical regions dedicated to language processing and executive function [Crinion J, et al. (2006) Science 312:1537–1540; Kim KHS, et al. (1997) Nature 388:171–174]. Comparatively, musical training, another type of sensory enrichment, translates to expertise in cognitive processing and refined biological processing of sound in both cortical and subcortical structures. Therefore, we asked whether bilingualism can also promote experience-dependent plasticity in subcortical auditory processing. We found that adolescent bilinguals, listening to the speech syllable [da], encoded the stimulus more robustly than age-matched monolinguals. Specifically, bilinguals showed enhanced encoding of the fundamental frequency, a feature known to underlie pitch perception and grouping of auditory objects. This enhancement was associated with executive function advantages. Thus, through experience-related tuning of attention, the bilingual auditory system becomes highly efficient in automatically processing sound. This study provides biological evidence for system-wide neural plasticity in auditory experts that facilitates a tight coupling of sensory and cognitive functions. PMID:22547804
Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal
The aim of the study was to compare auditory and speech outcomes and electrical parameters an average of 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN, currently aged 5 to 12.2 years, who had been using a cochlear implant for at least 3.4 years, and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) auditory and speech tests; (2) residual hearing; (3) electrical stimulation parameters; (4) correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes. The children with isolated AN performed as well as the children with SNHL on auditory and speech recognition tests in both quiet and noise. More children in the AN group than in the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, a lower comfortable level and dynamic range, and a lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and mid-cochlear electrodes and low-frequency acoustic thresholds. Prelingual children with isolated AN who fail to show expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a pattern similar to that of children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution of the remaining neurons and hair cells to electrophonic hearing. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
2014-01-01
One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and repeat an auditory speech stimulus as quickly as possible. The fact that close-shadowing can occur very rapidly, and much faster than manual identification of the speech target, is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions, often interpreted as reflecting a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed motor responses in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but they were also less accurate in noise, which suggests that the motor representations evoked by the speech input may be coarse at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. However, no interaction was observed between modality and response type. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision may be available. PMID:25009512
Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, Rainer; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia
2014-01-01
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex. PMID:24391486
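One standard way to parameterize the spectral and temporal modulations discussed above is the modulation spectrum, the 2D Fourier transform of a (log) spectrogram. The sketch below is a generic illustration of that representation, not the authors' encoding model:

    import numpy as np
    from scipy.signal import spectrogram

    def modulation_spectrum(x, fs, nperseg=512, noverlap=384):
        """Joint spectro-temporal modulation content of a sound.

        The two axes of the returned array index spectral modulation
        (cycles along the frequency axis) and temporal modulation (Hz),
        the quantities to which auditory neurons are said to be tuned."""
        f, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        logS = np.log(S + 1e-12)      # compress dynamic range
        logS -= logS.mean()           # remove DC before the 2D FFT
        return np.abs(np.fft.fftshift(np.fft.fft2(logS)))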
Audiovisual integration facilitates monkeys' short-term memory.
Bigelow, James; Poremba, Amy
2016-07-01
Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.
Biologically inspired computation and learning in Sensorimotor Systems
NASA Astrophysics Data System (ADS)
Lee, Daniel D.; Seung, H. S.
2001-11-01
Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the networks are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibular ocular reflex system.
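The channel-reweighting idea can be caricatured in a few lines: channels whose outputs agree with the teacher gain influence, and unreliable ones are downweighted. This is only a schematic reading of the abstract; the function and update rule below are hypothetical, not the robot's actual algorithm:

    import numpy as np

    def update_channel_weights(weights, channel_outputs, teacher, lr=0.05):
        """Hypothetical reliability-weighted fusion of sensory channels.

        Each channel (color, luminance, motion, audio) emits a salience
        trace; weights drift toward each channel's correlation with the
        teacher signal, downweighting unreliable channels."""
        reliability = np.array(
            [np.corrcoef(c, teacher)[0, 1] for c in channel_outputs])
        weights = weights + lr * (reliability - weights)
        weights = np.clip(weights, 0.0, None)
        return weights / max(weights.sum(), 1e-9)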
A biologically plausible computational model for auditory object recognition.
Larson, Eric; Billimoria, Cyrus P; Sen, Kamal
2009-01-01
Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, such spike similarity metrics can be used to classify spike trains according to the stimuli that evoked them; the prototype spike train nearest to the tested spike train then identifies the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination, inspired by a spike distance metric, that uses a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian analog of primary auditory cortex. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
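A concrete example of the kind of spike similarity metric mentioned above is the van Rossum distance, which is also suggestive of a circuit implementation (exponential filtering followed by integration); the nearest-prototype rule then reduces to an argmin. A minimal sketch, with spike times in seconds and illustrative parameters:

    import numpy as np

    def van_rossum_distance(spikes_a, spikes_b, tau=0.01, dt=0.001, t_max=1.0):
        """Distance between two spike trains after exponential filtering."""
        t = np.arange(0.0, t_max, dt)

        def filtered(spike_times):
            trace = np.zeros_like(t)
            for s in spike_times:
                trace += (t >= s) * np.exp(-np.clip(t - s, 0.0, None) / tau)
            return trace

        d = filtered(spikes_a) - filtered(spikes_b)
        return np.sqrt(np.sum(d ** 2) * dt / tau)

    def nearest_prototype(test_train, prototypes):
        """Classify a spike train by its nearest stored prototype train."""
        return int(np.argmin([van_rossum_distance(test_train, p)
                              for p in prototypes]))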
The Effects of Repeated Low-Level Blast Exposure on Hearing in Marines
Kubli, Lina R.; Pinto, Robin L.; Burrows, Holly L.; Littlefield, Philip D.; Brungart, Douglas S.
2017-01-01
Background: The study evaluates a group of military service members, called “Breachers”, who specialize in explosive breaching instruction and are routinely exposed to multiple low-level blasts while teaching breaching at the U.S. Marine Corps base in Quantico, Virginia. The objective of this study was to determine whether there are any acute or long-term auditory changes due to the repeated low-level blast exposures used in training. The performance of the instructor group (“Breachers”) was compared to that of a control group (“Engineers”). Methods: A total of 11 Breachers and 4 Engineers were evaluated in the study. The participants received comprehensive auditory tests, including pure-tone testing, speech-in-noise (SIN) measures, and central auditory behavioral and objective tests using early and late (P300) auditory evoked potentials, over a period of 17 months. They also received shorter assessments immediately following blast exposure onsite at Quantico. Results: No acute or longitudinal effects were identified. However, there were some interesting baseline effects in both groups. Contrary to expectations, onsite hearing thresholds and distortion product otoacoustic emissions were slightly better at a few frequencies immediately after blast exposure than measurements obtained with the same equipment weeks to months after each blast exposure. Conclusions: To date, the current study is the most comprehensive evaluation of the long-term effects of blast exposure on hearing. Despite extensive testing, the findings suggest that the exposure levels in this military training environment do not have an obvious deleterious effect on hearing. PMID:28937017
Bieszczad, Kasia M; Bechay, Kiro; Rusche, James R; Jacques, Vincent; Kudugunti, Shashi; Miao, Wenyan; Weinberger, Norman M; McGaugh, James L; Wood, Marcelo A
2015-09-23
Research over the past decade indicates a novel role for epigenetic mechanisms in memory formation. Of particular interest is chromatin modification by histone deacetylases (HDACs), which, in general, negatively regulate transcription. HDAC deletion or inhibition facilitates transcription during memory consolidation and enhances long-lasting forms of synaptic plasticity and long-term memory. A key open question remains: How does blocking HDAC activity lead to memory enhancements? To address this question, we tested whether a normal function of HDACs is to gate information processing during memory formation. We used a class I HDAC inhibitor, RGFP966 (C21H19FN4O), to test the role of HDAC inhibition for information processing in an auditory memory model of learning-induced cortical plasticity. HDAC inhibition may act beyond memory enhancement per se to instead regulate information in ways that lead to encoding more vivid sensory details into memory. Indeed, we found that RGFP966 controls memory induction for acoustic details of sound-to-reward learning. Rats treated with RGFP966 while learning to associate sound with reward had stronger memory and additional information encoded into memory for highly specific features of sounds associated with reward. Moreover, behavioral effects occurred with unusually specific plasticity in primary auditory cortex (A1). Class I HDAC inhibition appears to engage A1 plasticity that enables additional acoustic features to become encoded in memory. Thus, epigenetic mechanisms act to regulate sensory cortical plasticity, which offers an information processing mechanism for gating what and how much is encoded to produce exceptionally persistent and vivid memories. Significance statement: Here we provide evidence of an epigenetic mechanism for information processing. The study reveals that a class I HDAC inhibitor (Malvaez et al., 2013; Rumbaugh et al., 2015; RGFP966, chemical formula C21H19FN4O) alters the formation of auditory memory by enabling more acoustic information to become encoded into memory. Moreover, RGFP966 appears to affect cortical plasticity: the primary auditory cortex reorganized in a manner that was unusually "tuned-in" to the specific sound cues and acoustic features that were related to reward and subsequently remembered. We propose that HDACs control "informational capture" at a systems level for what and how much information is encoded by gating sensory cortical plasticity that underlies the sensory richness of newly formed memories. Copyright © 2015 the authors 0270-6474/15/3513125-09$15.00/0.
Dehmel, Susanne; Eisinger, Daniel; Shore, Susan E.
2012-01-01
Tinnitus or ringing of the ears is a subjective phantom sensation necessitating behavioral models that objectively demonstrate the existence and quality of the tinnitus sensation. The gap detection test uses the acoustic startle response elicited by loud noise pulses and its gating or suppression by preceding sub-startling prepulses. Gaps in noise bands serve as prepulses, assuming that ongoing tinnitus masks the gap and results in impaired gap detection. This test has shown its reliability in rats, mice, and gerbils. No data exists for the guinea pig so far, although gap detection is similar across mammals and the acoustic startle response is a well-established tool in guinea pig studies of psychiatric disorders and in pharmacological studies. Here we investigated the startle behavior and prepulse inhibition (PPI) of the guinea pig and showed that guinea pigs have a reliable startle response that can be suppressed by 15 ms gaps embedded in narrow noise bands preceding the startle noise pulse. After recovery of auditory brainstem response (ABR) thresholds from a unilateral noise over-exposure centered at 7 kHz, guinea pigs showed diminished gap-induced reduction of the startle response in frequency bands between 8 and 18 kHz. This suggests the development of tinnitus in frequency regions that showed a temporary threshold shift (TTS) after noise over-exposure. Changes in discharge rate and synchrony, two neuronal correlates of tinnitus, should be reflected in altered ABR waveforms, which would be useful to objectively detect tinnitus and its localization to auditory brainstem structures. Therefore, we analyzed latencies and amplitudes of the first five ABR waves at suprathreshold sound intensities and correlated ABR abnormalities with the results of the behavioral tinnitus testing. Early ABR wave amplitudes up to N3 were increased for animals with tinnitus possibly stemming from hyperactivity and hypersynchrony underlying the tinnitus percept. Animals that did not develop tinnitus after noise exposure showed the opposite effect, a decrease in wave amplitudes for the later waves P4–P5. Changes in latencies were only observed in tinnitus animals, which showed increased latencies. Thus, tinnitus-induced changes in the discharge activity of the auditory nerve and central auditory nuclei are represented in the ABR. PMID:22666193
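The behavioral readout in such gap-detection paradigms is conventionally expressed as percent inhibition of the startle amplitude; a diminished value in a given noise band is taken as evidence that the tinnitus percept fills the silent gap. A minimal sketch (an illustrative scoring, not necessarily the authors' exact formula):

    def gap_inhibition(startle_with_gap, startle_without_gap):
        """Percent suppression of the startle response by a gap prepulse.

        Values near zero (little suppression) in a particular noise band
        are interpreted as tinnitus masking the gap."""
        return 100.0 * (1.0 - startle_with_gap / startle_without_gap)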
Bharadwaj, Sneha V; Maricle, Denise; Green, Laura; Allman, Tamby
2015-10-01
The objective of the study was to examine short-term memory and working memory through both visual and auditory tasks in school-age children with cochlear implants. The relationships between performance on these cognitive skills and reading and language outcomes were also examined. Ten children between the ages of 7 and 11 years with early-onset bilateral severe-profound hearing loss participated in the study. Auditory and visual short-term memory, auditory and visual working memory subtests, and verbal knowledge measures were assessed using the Woodcock Johnson III Tests of Cognitive Abilities, the Wechsler Intelligence Scale for Children-IV Integrated, and the Kaufman Assessment Battery for Children II. Reading outcomes were assessed using the Woodcock Reading Mastery Test III. Performance on visual short-term memory and visual working memory measures in children with cochlear implants was within the average range when compared to the normative mean. However, auditory short-term memory and auditory working memory measures were below average when compared to the normative mean. Performance was also below average on all verbal knowledge measures. Regarding reading outcomes, children with cochlear implants scored below average on listening and passage comprehension tasks, and these measures were positively correlated with visual short-term memory, visual working memory and auditory short-term memory. Performance on auditory working memory subtests was not related to reading or language outcomes. The children with cochlear implants in this study demonstrated better performance in visual (spatial) working memory and short-term memory skills than in auditory working memory and auditory short-term memory skills. Significant positive relationships were found between visual working memory and reading outcomes. The results provide support for the idea that working memory capacity is modality specific in children with hearing loss. Based on these findings, reading instruction that capitalizes on strengths in visual short-term memory and working memory is suggested for young children with early-onset hearing loss. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Hall, Deborah A; Guest, Hannah; Prendergast, Garreth; Plack, Christopher J; Francis, Susan T
2018-01-01
Background: Rodent studies indicate that noise exposure can cause permanent damage to synapses between inner hair cells and high-threshold auditory nerve fibers, without permanently altering threshold sensitivity. These demonstrations of what is commonly known as hidden hearing loss have been confirmed in several rodent species, but the implications for human hearing are unclear. Objective: Our Medical Research Council–funded program aims to address this unanswered question, by investigating functional consequences of the damage to the human peripheral and central auditory nervous system that results from cumulative lifetime noise exposure. Behavioral and neuroimaging techniques are being used in a series of parallel studies aimed at detecting hidden hearing loss in humans. The planned neuroimaging study aims to (1) identify central auditory biomarkers associated with hidden hearing loss; (2) investigate whether there are any additive contributions from tinnitus or diminished sound tolerance, which are often comorbid with hearing problems; and (3) explore the relation between subcortical functional magnetic resonance imaging (fMRI) measures and the auditory brainstem response (ABR). Methods: Individuals aged 25 to 40 years with pure tone hearing thresholds ≤20 dB hearing level over the range 500 Hz to 8 kHz and no contraindications for MRI or signs of ear disease will be recruited into the study. Lifetime noise exposure will be estimated using an in-depth structured interview. Auditory responses throughout the central auditory system will be recorded using ABR and fMRI. Analyses will focus predominantly on correlations between lifetime noise exposure and auditory response characteristics. Results: This paper reports the study protocol. The funding was awarded in July 2013. Enrollment for the study described in this protocol commenced in February 2017 and was completed in December 2017. Results are expected in 2018. Conclusions: This challenging and comprehensive study will have the potential to impact diagnostic procedures for hidden hearing loss, enabling early identification of noise-induced auditory damage via the detection of changes in central auditory processing. Consequently, this will generate the opportunity to give personalized advice regarding provision of ear defense and monitoring of further damage, thus reducing the incidence of noise-induced hearing loss. PMID:29523503
Hydrogen protects auditory hair cells from cisplatin-induced free radicals.
Kikkawa, Yayoi S; Nakagawa, Takayuki; Taniguchi, Mirei; Ito, Juichi
2014-09-05
Cisplatin is a widely used chemotherapeutic agent for the treatment of various malignancies. However, its maximum dose is often limited by severe ototoxicity. Cisplatin ototoxicity may involve the production of reactive oxygen species (ROS) in the inner ear via activation of cochlea-specific enzymes. Molecular hydrogen was recently established as an antioxidant that selectively reduces ROS, and has been reported to protect the central nervous system, liver, kidney and cochlea from oxidative stress. The purpose of this study was to evaluate the potential of molecular hydrogen to protect cochleae against cisplatin. We cultured mouse cochlear explants in medium containing various concentrations of cisplatin and examined the effects of hydrogen gas dissolved directly into the media. Following 48-h incubation, the presence of intact auditory hair cells was assayed by phalloidin staining. Cisplatin caused hair cell loss in a dose-dependent manner, whereas the addition of hydrogen gas significantly increased the number of remaining auditory hair cells. Additionally, hydroxyphenyl fluorescein (HPF) staining of the spiral ganglion showed that formation of hydroxyl radicals was successfully reduced in hydrogen-treated cochleae. These data suggest that molecular hydrogen can protect auditory tissues against cisplatin toxicity, thus providing an additional strategy for protection against drug-induced inner ear damage. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E
2015-01-01
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.
Młynarski, Wiktor
2014-01-01
To date, a number of studies have shown that the receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons that explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. A representation of auditory space is therefore learned in a purely unsupervised way by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures that allow behaviorally vital inferences about the environment. PMID:24639644
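As a rough illustration of the first step, a linear efficient-coding transform such as ICA can be fit to flattened binaural spectrogram patches with an off-the-shelf implementation. The shapes and preprocessing here are assumptions, and the paper's hierarchical extension is not reproduced:

    import numpy as np
    from sklearn.decomposition import FastICA

    def learn_binaural_features(patches, n_components=100, seed=0):
        """Learn a statistically independent code for binaural sound.

        `patches`: array of shape (n_patches, n_dims), each row a
        flattened spectrogram patch with left- and right-ear channels
        concatenated."""
        ica = FastICA(n_components=n_components, random_state=seed,
                      max_iter=1000)
        activations = ica.fit_transform(patches)  # per-patch activations
        features = ica.mixing_                    # columns: learned features
        return activations, features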
The sonar aperture and its neural representation in bats.
Heinrich, Melina; Warmbold, Alexander; Hoffmann, Susanne; Firzlaff, Uwe; Wiegrebe, Lutz
2011-10-26
As opposed to visual imaging, biosonar imaging of spatial object properties represents a challenge for the auditory system because its sensory epithelium is not arranged along space axes. For echolocating bats, object width is encoded by the amplitude of its echo (echo intensity) but also by the naturally covarying spread of angles of incidence from which the echoes impinge on the bat's ears (sonar aperture). It is unclear whether bats use the echo intensity and/or the sonar aperture to estimate an object's width. We addressed this question in a combined psychophysical and electrophysiological approach. In three virtual-object playback experiments, bats of the species Phyllostomus discolor had to discriminate simple reflections of their own echolocation calls differing in echo intensity, sonar aperture, or both. Discrimination performance for objects with physically correct covariation of sonar aperture and echo intensity ("object width") did not differ from discrimination performances when only the sonar aperture was varied. Thus, the bats were able to detect changes in object width in the absence of intensity cues. The psychophysical results are reflected in the responses of a population of units in the auditory midbrain and cortex that responded strongest to echoes from objects with a specific sonar aperture, regardless of variations in echo intensity. Neurometric functions obtained from cortical units encoding the sonar aperture are sufficient to explain the behavioral performance of the bats. These current data show that the sonar aperture is a behaviorally relevant and reliably encoded cue for object size in bat sonar.
Khoshkholgh, Roghaie; Keshavarz, Tahereh; Moshfeghy, Zeinab; Akbarzadeh, Marzieh; Asadi, Nasrin; Zare, Najaf
2016-01-01
Objective: To compare the effects of two methods of auditory stimulation, delivered to the mother or to the fetus, on nonstress test (NST) results in 2011-2012. Materials and methods: In this single-blind clinical trial, 213 pregnant women with a gestational age of 37-41 weeks and no pregnancy complications were randomly divided into 3 groups (maternal auditory intervention, fetal auditory intervention, and control), each containing 71 subjects. In the intervention groups, music was played through the second 10 minutes of the NST. The three groups were compared regarding baseline fetal heart rate and the number of accelerations in the first and second 10 minutes of the NST. The data were analyzed using one-way ANOVA, the Kruskal-Wallis test, and paired t-tests. Results: The results showed no significant difference among the three groups regarding baseline fetal heart rate in the first (p = 0.945) or second (p = 0.763) 10 minutes. However, a significant difference was found among the three groups in the number of accelerations in the second 10 minutes. Also, a significant difference was observed in the number of accelerations in the maternal auditory intervention group (p = 0.013) and the fetal auditory intervention group (p < 0.001). The difference in the number of accelerations between the first and second 10 minutes was also statistically significant (p = 0.002). Conclusion: Music intervention increased the number of accelerations, an indicator of fetal health. Further studies on the issue are nonetheless required. PMID:27385971
Responses in Rat Core Auditory Cortex are Preserved during Sleep Spindle Oscillations
Sela, Yaniv; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Tononi, Giulio; Nir, Yuval
2016-01-01
Study Objectives: Sleep is defined as a reversible state of reduced sensory responsiveness and immobility. A long-standing hypothesis suggests that a high arousal threshold during non-rapid eye movement (NREM) sleep is mediated by sleep spindle oscillations, which impair thalamocortical transmission of incoming sensory stimuli. Here we set out to test this idea directly by examining sensory-evoked neuronal spiking activity during natural sleep. Methods: We recorded isolated neurons (n = 269), multiunit activity (MUA), and local field potentials (LFP) in rat core auditory cortex (A1) during NREM sleep, comparing responses to sounds in the presence or absence of sleep spindles. Results: We found that sleep spindles robustly modulated the timing of neuronal discharges in A1. However, responses to sounds were nearly identical for all measured signals, including isolated neurons, MUA, and LFPs (all differences < 10%). Furthermore, in 10% of trials, auditory stimulation led to an early termination of the sleep spindle oscillation around 150–250 msec following stimulus onset. Finally, active ON states and inactive OFF periods during slow waves in NREM sleep affected the auditory response in opposite ways, depending on stimulus intensity. Conclusions: Responses in core auditory cortex are well preserved regardless of sleep spindles recorded in that area, suggesting that thalamocortical sensory relay remains functional during sleep spindles and that sensory disconnection in sleep is mediated by other mechanisms. Citation: Sela Y, Vyazovskiy VV, Cirelli C, Tononi G, Nir Y. Responses in rat core auditory cortex are preserved during sleep spindle oscillations. SLEEP 2016;39(5):1069–1082. PMID:26856904
Brainstem auditory evoked responses in an equine patient population: part I--adult horses.
Aleman, M; Holliday, T A; Nieto, J E; Williams, D C
2014-01-01
Brainstem auditory evoked response (BAER) testing has been an underused diagnostic modality in horses, as evidenced by few reports on the subject. The objective was to describe BAER findings, common clinical signs, and causes of hearing loss in adult horses in a retrospective design (study group, 76 horses; control group, 8 horses). BAER records from the Clinical Neurophysiology Laboratory were reviewed for the years 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a post hoc Bonferroni test were performed. Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral in 74% (n = 42/57) and unilateral in 26% (n = 15/57) of the horses. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease, whereas THO and otitis were the most common causes of unilateral deficits. Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.
Hamm, Jordan P; Ethridge, Lauren E; Shapiro, John R; Pearlson, Godfrey D; Tamminga, Carol A; Sweeney, John A; Keshavan, Matcheri S; Thaker, Gunvant K; Clementz, Brett A
2017-01-01
Objectives: Bipolar I disorder is a disabling illness affecting 1% of people worldwide. Family and twin studies suggest that psychotic bipolar disorder (BDP) represents a homogenous subgroup with an etiology distinct from non-psychotic bipolar disorder (BDNP) and partially shared with schizophrenia. Studies of auditory electrophysiology [e.g., paired-stimulus and oddball measured with electroencephalography (EEG)] consistently report deviations in psychotic groups (schizophrenia, BDP), yet such studies comparing BDP and BDNP are sparse and, in some cases, conflicting. Auditory EEG responses are significantly reduced in unaffected relatives of psychosis patients, suggesting that they may relate to both psychosis liability and expression. Methods: While 64-sensor EEGs were recorded, age- and gender-matched samples of 70 BDP, 35 BDNP {20 with a family history of psychosis [BDNP(+)]}, and 70 psychiatrically healthy subjects were presented with typical auditory paired-stimulus and auditory oddball paradigms. Results: Oddball P3b reductions were present and indistinguishable across all patient groups. P2s to paired stimuli were abnormal only in BDP and BDNP(+). Conversely, N1 reductions to stimuli in both paradigms and P3a reductions were present in both BDP and BDNP(−) groups but were absent in BDNP(+). Conclusions: While nearly all auditory neural response components studied were abnormal in BDP, BDNP abnormalities at early and mid latencies were moderated by family psychosis history. The relationship between psychosis expression, heritable psychosis risk, and neurophysiology within bipolar disorder, therefore, may be complex. Consideration of such clinical disease heterogeneity may be important for future investigations of the pathophysiology of major psychiatric disturbance. PMID:23941660
Dose-dependent suppression by ethanol of transient auditory 40-Hz response.
Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H
2000-02-01
Acute alcohol (ethanol) challenge is known to induce various cognitive disturbances, yet the neural basis of these effects is poorly understood. Auditory transient evoked gamma-band (40-Hz) oscillatory responses have been suggested to be associated with various perceptual and cognitive functions in humans; however, alcohol effects on auditory 40-Hz responses have not been investigated to date. The objective of the study was to test the dose-related impact of alcohol on auditory transient evoked 40-Hz responses during a selective-attention task. Ten healthy social drinkers ingested, in four separate sessions, 0.00, 0.25, 0.50, or 0.75 g/kg of 10% (v/v) alcohol solution. The order of the sessions was randomized and a double-blind procedure was employed. During a selective attention task, 300-Hz standard and 330-Hz deviant tones were presented to the left ear, and 1000-Hz standards and 1100-Hz deviants to the right ear of the subjects (P = 0.425 for each standard, P = 0.075 for each deviant). The subjects attended to a designated ear and were to detect the deviants therein while ignoring tones in the other ear. The auditory transient evoked 40-Hz responses elicited by both the attended and unattended standard tones were significantly suppressed by the 0.50 and 0.75 g/kg alcohol doses. Alcohol thus suppresses auditory transient evoked 40-Hz oscillations even at moderate blood alcohol concentrations. Given the putative role of gamma-band oscillations in cognition, this finding could be associated with certain alcohol-induced cognitive deficits.
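One common way to quantify the transient evoked gamma-band response studied here is to band-pass the averaged evoked potential around 40 Hz and take the early peak of its envelope; the sketch below is a generic quantification, not the study's exact analysis:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def transient_40hz_amplitude(evoked, fs, band=(30.0, 50.0), window_s=0.1):
        """Peak 30-50 Hz envelope in the first ~100 ms of an averaged
        auditory evoked response sampled at `fs` Hz."""
        b, a = butter(4, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)],
                      btype="band")
        gamma = filtfilt(b, a, evoked)       # zero-phase band-pass filter
        envelope = np.abs(hilbert(gamma))    # analytic-signal amplitude
        return envelope[: int(window_s * fs)].max()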
Rodriguez, Rosendo A
2004-06-01
Focal neurologic and intellectual deficits or memory problems are relatively frequent after cardiac surgery. These complications have been associated with cerebral hypoperfusion, embolization, and inflammation occurring during or after surgery. Auditory evoked potentials, a neurophysiologic technique that evaluates the function of neural structures from the auditory nerve to the cortex, provide useful information about the functional status of the brain during major cardiovascular procedures. Skepticism regarding artifacts and difficulty of interpretation has outweighed consideration of their potential utility and noninvasiveness. This paper reviews the evidence for their potential applications in several aspects of the management of cardiac surgery patients. The sensitivity of auditory evoked potentials to the effects of changes in brain temperature makes them useful for monitoring cerebral hypothermia and rewarming during cardiopulmonary bypass. The close relationship between evoked potential waveforms and specific anatomic structures facilitates the assessment of the functional integrity of the central nervous system in cardiac surgery patients. This feature may also be relevant in the management of critical patients under sedation and coma, or in the evaluation of their prognosis during critical care. Their objectivity, reproducibility, and relative insensitivity to learning effects make auditory evoked potentials attractive for the cognitive assessment of cardiac surgery patients. From a clinical perspective, auditory evoked potentials represent an additional window onto the underlying cerebral processes of healthy and diseased patients. From a research standpoint, this technology offers opportunities for a better understanding of the particular cerebral deficits associated with patients undergoing major cardiovascular procedures.
Electrostimulation mapping of comprehension of auditory and visual words.
Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François
2015-10-01
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and on the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), as well as auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated stages of perceptual awareness attached to speech comprehension: the initial word-discrimination process can be considered an "automatic" stage, as attentional feedback was not impaired by stimulation, unlike at the lexical-semantic stage. A multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and in naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with dual-stream models of language comprehension. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cognitive mechanisms associated with auditory sensory gating
Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.
2016-01-01
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
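In paired-stimulus paradigms like the one used here, gating strength is conventionally summarized as the ratio of the P50 amplitude evoked by the second click to that evoked by the first; a one-line sketch, assuming the amplitudes have already been measured:

    def p50_gating_ratio(p50_s1, p50_s2):
        """S2/S1 amplitude ratio in a paired-click paradigm; smaller
        ratios indicate stronger sensory gating (more suppression of
        the response to the repeated stimulus)."""
        return p50_s2 / p50_s1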
Dynamic sound localization in cats
Ruhland, Janet L.; Jones, Amy E.
2015-01-01
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772
Auditory cortex of newborn bats is prewired for echolocation.
Kössl, Manfred; Voss, Cornelia; Mora, Emanuel C; Macias, Silvio; Foeller, Elisabeth; Vater, Marianne
2012-04-10
Neuronal computation of object distance from echo delay is an essential task that echolocating bats must master for spatial orientation and the capture of prey. In the dorsal auditory cortex of bats, neurons specifically respond to combinations of short frequency-modulated components of emitted call and delayed echo. These delay-tuned neurons are thought to serve in target range calculation. It is unknown whether neuronal correlates of active space perception are established by experience-dependent plasticity or by innate mechanisms. Here we demonstrate that in the first postnatal week, before onset of echolocation and flight, dorsal auditory cortex already contains functional circuits that calculate distance from the temporal separation of a simulated pulse and echo. This innate cortical implementation of a purely computational processing mechanism for sonar ranging should enhance survival of juvenile bats when they first engage in active echolocation behaviour and flight.
Electroencephalographic and behavioral effects of nocturnally occurring jet aircraft sounds.
NASA Technical Reports Server (NTRS)
Levere, T. E.; Bartus, R. T.; Hart, F. D.
1972-01-01
The present research provides data for the objective evaluation of the effects of a specific complex auditory stimulus presented during sleep. The auditory stimulus was a jet aircraft flyover of approximately 20-sec duration and a peak intensity level of approximately 80 dB(A). Our specific interests were how this stimulus would interact with the frequency pattern of the sleeping EEG and whether there would be any carry-over effects of the nocturnally presented stimuli to the waking state. The results indicated that the physiological effects (changes in electroencephalographic activity) produced by the jet aircraft stimuli considerably outlasted the physical presence of the auditory stimuli. Further, behavioral and electroencephalographic changes were noted during waking performance following nights disturbed by the jet aircraft flyovers that were not apparent following undisturbed nights.
Auditory and visual cortex of primates: a comparison of two sensory systems
Rauschecker, Josef P.
2014-01-01
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
Relational Learning in Children with Deafness and Cochlear Implants
ERIC Educational Resources Information Center
Almeida-Verdu, Ana Claudia; Huziwara, Edson M.; de Souza, Deisy G.; de Rose, Julio C.; Bevilacqua, Maria Cecilia; Lopes, Jair, Jr.; Alves, Cristiane O.; McIlvane, William J.
2008-01-01
This four-experiment series sought to evaluate the potential of children with neurosensory deafness and cochlear implants to exhibit auditory-visual and visual-visual stimulus equivalence relations within a matching-to-sample format. Twelve children who became deaf prior to acquiring language (prelingual) and four who became deaf afterwards…
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
NASA Astrophysics Data System (ADS)
Oppenheim, Jacob N.; Magnasco, Marcelo O.
2013-01-01
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
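The 1/(4π) figure is the Gabor limit on the product of a signal's RMS extents in time and frequency, and a Gaussian pulse attains it exactly. As an illustrative numerical check (not the authors' analysis code; the sample rate and envelope width are arbitrary), the following Python sketch verifies the bound for a sampled Gaussian:

```python
import numpy as np

fs = 48_000.0                        # sample rate (Hz), arbitrary
t = np.arange(-0.5, 0.5, 1 / fs)     # 1 s of time, centred on zero
sigma = 0.01                         # Gaussian envelope width (s), arbitrary
s = np.exp(-t**2 / (2 * sigma**2))   # Gaussian pulse: meets the limit exactly

def rms_width(x, density):
    """RMS width of coordinate x under a non-negative energy density."""
    w = density / density.sum()
    mu = (x * w).sum()
    return np.sqrt(((x - mu) ** 2 * w).sum())

dt = rms_width(t, s**2)                              # temporal extent
S = np.fft.fftshift(np.fft.fft(s))                   # two-sided spectrum
f = np.fft.fftshift(np.fft.fftfreq(len(s), 1 / fs))
df = rms_width(f, np.abs(S) ** 2)                    # spectral extent

print(f"dt*df = {dt*df:.4f}  vs  1/(4*pi) = {1/(4*np.pi):.4f}")  # both ~0.0796
```

Listeners in the study beat this product by up to an order of magnitude, which is only possible if the auditory system does something other than a fixed linear time-frequency decomposition.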
Zakaria, Mohd Normani; Jalaei, Bahram
2017-11-01
Auditory brainstem responses evoked by complex stimuli such as speech syllables have been studied in normal subjects and subjects with compromised auditory function. The stability of the speech-evoked auditory brainstem response (speech-ABR) over time has been reported, but the literature is limited. The present study was carried out to determine the test-retest reliability of the speech-ABR in healthy children at a low sensation level. Seventeen healthy children (6 boys, 11 girls) aged from 5 to 9 years (mean = 6.8 ± 3.3 years) were tested in two sessions separated by a 3-month period. The stimulus was a 40-ms syllable /da/ presented at 30 dB sensation level. As revealed by paired t-test and intra-class correlation (ICC) analyses, peak latencies, peak amplitudes and composite onset measures of the speech-ABR were highly replicable. Compared to other parameters, higher ICC values were noted for the peak latencies of the speech-ABR. The present study is the first to report the test-retest reliability of the speech-ABR recorded at low stimulation levels in healthy children. Given its good stability, the speech-ABR can be used as an objective indicator for assessing the effectiveness of auditory rehabilitation in hearing-impaired children in future studies. Copyright © 2017 Elsevier B.V. All rights reserved.
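The abstract does not state which ICC model was used; as a generic illustration of such a test-retest analysis, this sketch computes a two-way consistency ICC(3,1) (Shrout & Fleiss) on hypothetical latencies, not data from the study:

```python
import numpy as np

def icc_3_1(y):
    """Two-way mixed, consistency ICC(3,1) after Shrout & Fleiss (1979).
    y: (n subjects, k sessions) array of repeated measurements."""
    n, k = y.shape
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_subj = k * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_sess = n * ((y.mean(axis=0) - grand) ** 2).sum()
    bms = ss_subj / (n - 1)                                     # between-subjects MS
    ems = (ss_total - ss_subj - ss_sess) / ((n - 1) * (k - 1))  # residual MS
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical peak latencies (ms) for five children at two sessions:
lat = np.array([[6.6, 6.7], [6.9, 6.9], [6.5, 6.6], [7.0, 6.9], [6.7, 6.7]])
print(round(icc_3_1(lat), 2))   # 0.89: high test-retest reliability
```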
An assessment of auditory-guided locomotion in an obstacle circumvention task.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2016-06-01
This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded, normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision, without generating sound, was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than with visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and the obstacle. Unlike with visual guidance, participants did not always walk around the side that afforded the most space under auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.
Visual motion disambiguation by a subliminal sound.
Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier
2008-09-01
There is growing interest in the effect of sound on visual motion perception. One well-studied case is the ambiguous display in which two identical objects moving towards each other in two dimensions can be seen either to bounce off or to stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the bounce-inducing effect of sound is due to a perceptual binding process or merely to an explicit inference, the transient auditory stimulus resembling the physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence, compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.
Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.
Nava, Elena; Grassi, Massimo; Turati, Chiara
2016-01-01
Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly for sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that these correspondences can be observed only under specific circumstances. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children; we speculate that this could be due to linguistic and auditory skills that are still developing at age five.
Constantinidou, Fofi; Evripidou, Christiana
2012-01-01
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.
Text as a Supplement to Speech in Young and Older Adults
Krull, Vidya; Humes, Larry E.
2015-01-01
Objective: The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information would result in improved recognition accuracy compared to auditory or visual text information alone; 2) benefit from supplementing speech with visual text (auditory and visual enhancement) would be greater in young adults than in older adults; and 3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design: Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance in the auditory-only and text-only conditions. Finally, the relationship between the perceptual measures and the other independent measures was examined using principal-component factor analyses, followed by regression analyses. Results: Both young and older adults performed similarly on nine out of ten perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory-only and text-only performance. In all subjects, cognition emerged as a key predictor of a general speech-text integration ability. Conclusions: These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. They also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills. PMID:26458131
ERIC Educational Resources Information Center
Teng, Santani; Whitney, David
2011-01-01
Echolocation is a specialized application of spatial hearing that uses reflected auditory information to localize objects and represent the external environment. Although it has been documented extensively in nonhuman species, such as bats and dolphins, its use by some persons who are blind as a navigation and object-identification aid has…
Categorization in 3- and 4-Month-Old Infants: An Advantage of Words over Tones
ERIC Educational Resources Information Center
Ferry, Alissa L.; Hespos, Susan J.; Waxman, Sandra R.
2010-01-01
Neonates prefer human speech to other nonlinguistic auditory stimuli. However, it remains an open question whether there are any conceptual consequences of words on object categorization in infants younger than 6 months. The current study examined the influence of words and tones on object categorization in forty-six 3- to 4-month-old infants.…
Corona-Strauss, Farah I; Delb, Wolfgang; Bloching, Marc; Strauss, Daniel J
2008-01-01
We have recently shown that single sweeps of click-evoked auditory brainstem responses (ABRs) can be processed efficiently by a hybrid novelty detection system. This approach allowed for the objective detection of hearing thresholds in a fraction of the time of conventional schemes, making it appropriate for the efficient implementation of newborn hearing screening procedures. The objective of this study is to evaluate whether this approach might be improved further by different stimulation paradigms and electrode settings. In particular, we evaluate chirp stimulations, which compensate for basilar-membrane dispersion, and active electrodes, which are less sensitive to movements. This is the first study directed at single sweep processing of chirp-evoked ABRs. By concentrating on transparent features and a minimum number of adjustable parameters, we present an objective comparison of click vs. chirp stimulation and active vs. passive electrodes in ultrafast ABR detection. We show that chirp-evoked brainstem responses and active electrodes can improve the single sweep analysis of ABRs. Consequently, we conclude that single sweep processing of ABRs for the objective determination of hearing thresholds can be improved further by the use of optimized chirp stimulations and active electrodes.
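The chirps used in such screening work are derived from basilar-membrane dispersion models. As a rough illustration of the principle only, the sketch below builds a rising chirp under an assumed power-law cochlear delay, emitting low frequencies early so that all components reach their cochlear places at about the same time; the delay constants and sweep range are invented for illustration, not taken from the study:

```python
import numpy as np

fs = 48_000                         # sample rate (Hz), assumed
f1, f2 = 100.0, 10_000.0            # sweep range (Hz), assumed

def tau(f):
    """Illustrative power-law cochlear travelling-wave delay (s).
    Real screening chirps fit such a model to physiological data."""
    return 0.005 * (f / 1000.0) ** -0.5

# Emit each frequency early by its cochlear delay, so that all components
# reach their place on the basilar membrane at roughly the same time.
freqs = np.linspace(f1, f2, 10_000)
t_emit = tau(f1) - tau(freqs)       # emission time per frequency, from 0 up
T = t_emit[-1]                      # resulting chirp duration (~14 ms here)

t = np.arange(0, T, 1 / fs)
f_inst = np.interp(t, t_emit, freqs)        # instantaneous frequency
phase = 2 * np.pi * np.cumsum(f_inst) / fs  # integrate frequency to phase
chirp = np.sin(phase)                       # rising, dispersion-compensating chirp
```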
2012-01-01
Background: Prosthetic hand users have to rely extensively on visual feedback to manipulate their prosthetic devices, which seems to impose a high conscious burden on them. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods have been explored mainly by looking at performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods: 10 male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and setting were explained, after which subjects completed a 30-minute guided training session. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject answered the NASA TLX questionnaire, and during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results: The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal and grasping performance was obtained in the audiovisual modality. Conclusions: The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, which may reflect a visual "pop-out" or enhancement effect. The NASA TLX, the EEG alpha and beta bands, and heart rate could be used to further evaluate sensory feedback systems in prosthetic applications. PMID:22682425
Wang, Danying; Clouter, Andrew; Chen, Qiaoyu; Shapiro, Kimron L; Hanslmayr, Simon
2018-06-13
Episodic memories are rich in sensory information and often contain integrated information from different sensory modalities. For instance, we can store memories of a recent concert with visual and auditory impressions being integrated in one episode. Theta oscillations have recently been implicated as playing a causal role in synchronizing and effectively binding the different modalities together in memory. However, an open question is whether momentary fluctuations in theta synchronization predict the likelihood of associative memory formation for multisensory events. To address this question, we entrained the visual and auditory cortex at theta frequency (4 Hz) in a synchronous or asynchronous manner by modulating the luminance and volume of movies and sounds at 4 Hz, with a phase offset of 0° or 180°. EEG activity from human subjects (both sexes) was recorded while they memorized the association between a movie and a sound. Associative memory performance was significantly enhanced in the 0° compared to the 180° condition. Source-level analysis demonstrated that the physical stimuli effectively entrained their respective cortical areas with a corresponding phase offset. The findings constitute a successful replication of a previous study (Clouter et al., 2017). Importantly, the strength of entrainment during encoding correlated with the efficacy of associative memory, such that small phase differences between visual and auditory cortex predicted a high likelihood of correct retrieval in a later recall test. These findings suggest that theta oscillations serve a specific function in the episodic memory system: binding the contents of different modalities into coherent memory episodes. SIGNIFICANCE STATEMENT How multisensory experiences are bound to form a coherent episodic memory representation is one of the fundamental questions in human episodic memory research. Evidence from the animal literature suggests that the relative timing between an input and theta oscillations in the hippocampus is crucial for memory formation. We precisely controlled the timing between visual and auditory stimuli and the neural oscillations at 4 Hz using a multisensory entrainment paradigm. Human associative memory formation depends on coincident timing between sensory streams processed by the corresponding brain regions. We provide evidence, at the single-trial level, for a significant role of the relative timing of neural theta activity in human episodic memory, revealing a crucial mechanism underlying episodic memory. Copyright © 2018 the authors.
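The entrainment manipulation itself is simple to sketch: both stimulus streams are amplitude-modulated at 4 Hz, and only the relative phase differs between conditions. A minimal Python illustration (not the authors' stimulus code; the update rate and duration are assumptions):

```python
import numpy as np

fs = 60.0        # stimulus update rate (Hz); an assumption for illustration
dur = 3.0        # seconds of stimulation, arbitrary
f_theta = 4.0    # entrainment frequency from the study (Hz)
t = np.arange(0, dur, 1 / fs)

def modulator(phase_deg):
    """4 Hz raised-cosine gain envelope in [0, 1] at a given phase offset."""
    return 0.5 * (1 + np.sin(2 * np.pi * f_theta * t + np.deg2rad(phase_deg)))

luminance_gain = modulator(0)        # applied to the movie's luminance
volume_gain_sync = modulator(0)      # 0-degree condition: in phase with luminance
volume_gain_async = modulator(180)   # 180-degree condition: anti-phase
```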
He, Shuman; Grose, John H.; Teagle, Holly F.B.; Woodard, Jennifer; Park, Lisa R.; Hatch, Debora R.; Roush, Patricia; Buchman, Craig A.
2014-01-01
Objective: The overall aim of the study was to evaluate the feasibility of using electrophysiological measures of the auditory change complex (ACC) to identify candidates for cochlear implantation in children with auditory neuropathy spectrum disorder (ANSD). In order to achieve this overall aim, this study 1) assessed the feasibility of measuring the ACC evoked by temporal gaps in a group of children with ANSD across a wide age range; and 2) investigated the association between gap detection thresholds (GDTs) measured by the ACC recordings and open-set speech-perception performance in these subjects. Design: Nineteen children with bilateral ANSD ranging in age from 1.9 to 14.9 yrs (mean: 7.8 yrs) participated in this study. Electrophysiological recordings of the auditory event-related potential (ERP), including the onset ERP response and the ACC, were completed in all subjects, and open-set speech perception was evaluated for a subgroup of sixteen subjects. For the ERP recordings, the stimulus was a Gaussian noise presented through ER-3A insert earphones to the test ear. Two stimulation conditions were used. In the "control condition," the stimulus was an 800-ms Gaussian noise. In the "gapped condition," the stimuli were two noise segments, each being 400 ms in duration, separated by one of five gaps (i.e. 5, 10, 20, 50, or 100 ms). The inter-stimulus interval was 1200 ms. Aided open-set speech perception was assessed using the Phonetically Balanced Kindergarten (PBK) word lists presented at 60 dB SPL using recorded testing material in a sound booth. For speech perception tests, subjects wore their hearing aids at the settings recommended by their clinical audiologists. For a subgroup of five subjects, psychophysical gap detection thresholds for the Gaussian noise were also assessed using a three-interval, three-alternative forced-choice procedure. Results: Responses evoked by the onset of the Gaussian noise (i.e. onset responses) were recorded in all stimulation conditions from all subjects tested in this study. The presence/absence, peak latency and amplitude, and response width of the onset response did not correlate with aided PBK word scores. The objective GDTs measured with the ACC recordings from seventeen subjects ranged from 10 to 100 ms. The ACC was not recorded from two subjects for any gap durations tested in this study. There was a robust negative correlation between objective GDTs and aided PBK word scores. In general, subjects with prolonged objective GDTs showed low aided PBK word scores. GDTs measured using electrophysiological recordings of the ACC correlated well with those measured using psychophysical procedures in four of five subjects who were evaluated using both procedures. Conclusions: The clinical application of the onset response in predicting open-set speech-perception ability is relatively limited in children with ANSD. The ACC recordings can be used to objectively evaluate temporal resolution abilities in children with ANSD who have no severe comorbidities and are older than 1.9 years. The ACC can potentially be used as an objective tool to identify poor performers among children with ANSD using properly fit amplification, and who are thus cochlear implant candidates. PMID:25422994
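The control and gapped stimuli described in the Design are straightforward to reconstruct from the stated durations (800 ms continuous noise; two 400-ms segments separated by 5-100 ms gaps). A minimal Python sketch, with the sample rate and peak normalisation as assumptions rather than details from the study:

```python
import numpy as np

fs = 44_100                          # sample rate (Hz), assumed
rng = np.random.default_rng(0)

def gaussian_noise(dur_ms):
    """Peak-normalised Gaussian noise burst of the given duration."""
    x = rng.standard_normal(int(fs * dur_ms / 1000))
    return x / np.max(np.abs(x))

# "Control condition": one continuous 800-ms Gaussian noise.
control = gaussian_noise(800)

# "Gapped condition": two 400-ms segments separated by a silent gap.
def gapped(gap_ms):
    gap = np.zeros(int(fs * gap_ms / 1000))
    return np.concatenate([gaussian_noise(400), gap, gaussian_noise(400)])

stimuli = {g: gapped(g) for g in (5, 10, 20, 50, 100)}  # gap set from the study
```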
Brainstem origins for cortical 'what' and 'where' pathways in the auditory system.
Kraus, Nina; Nicol, Trent
2005-04-01
We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
To compare the 'F(fusion)-complex' with the mismatch negativity (MMN), both components being associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, while the formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied fusion of the base with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (the F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal and prefrontal regions were associated with the F-complex at all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front fusion (no duplex effect). The MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN thus reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
2016-01-13
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.
Koulaguina, Elena; Drisdelle, Brandi Lee; Alain, Claude; Grimault, Stephan; Eck, Douglas; Vachon, François; Jolicoeur, Pierre
2015-04-01
When the frequency of one harmonic, in a sound composed of many harmonics, is briefly mistuned and then returned to the 'in-tune' frequency and phase, observers report hearing this harmonic as a separate tone long after the brief period of mistuning - a phenomenon called harmonic enhancement. Here, we examined the consequence of harmonic enhancement on listeners' ability to detect a brief amplitude notch embedded in one of the harmonics after the period of mistuning. When present, the notch was either on the enhanced harmonic or on a different harmonic. Detection was better on the enhanced harmonic than on a non-enhanced harmonic. This finding suggests that attention was drawn to the enhanced harmonic (which constituted a new sound object) thereby easing the processing of sound features (i.e., a notch) within that object. This is the first evidence of a functional consequence of the after-effect of transient mistuning on auditory perception. Moreover, the findings provide support for an attention-based explanation of the enhancement phenomenon.
Human brain regions involved in recognizing environmental sounds.
Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A
2004-09-01
To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.
PSEN1 and PSEN2 gene expression in Alzheimer's disease brain: a new approach.
Delabio, Roger; Rasmussen, Lucas; Mizumoto, Igor; Viani, Gustavo-Arruda; Chen, Elizabeth; Villares, João; Costa, Isabela-Bazzo; Turecki, Gustavo; Linde, Sandra Aparecido; Smith, Marilia Cardoso; Payão, Spencer-Luiz
2014-01-01
Presenilin 1 (PSEN1) and presenilin 2 (PSEN2) genes encode the major component of γ-secretase, which is responsible for the sequential proteolytic cleavage of amyloid precursor protein and the subsequent formation of amyloid-β peptides. A total of 150 RNA samples from the entorhinal cortex, auditory cortex and hippocampal regions of individuals with Alzheimer's disease (AD) and elderly control subjects were analyzed using real-time RT-PCR. There were no differences between groups in PSEN1 expression. PSEN2 was significantly downregulated in the auditory cortex of AD patients when compared to controls and when compared to other brain regions of the patients. Altered PSEN2 expression may be a risk factor for AD.
Manipulating cell fate in the cochlea: a feasible therapy for hearing loss
Fujioka, Masato; Okano, Hideyuki; Edge, Albert SB
2015-01-01
Mammalian auditory hair cells do not spontaneously regenerate, unlike hair cells in lower vertebrates including fish and birds. In mammals, hearing loss due to the loss of hair cells is thus permanent and intractable. Recent studies in the mouse have demonstrated spontaneous hair cell regeneration during a short postnatal period, but this regenerative capacity is lost in the adult cochlea. Reduced regeneration coincides with a transition that results in a decreased pool of progenitor cells in the cochlear sensory epithelium. Here, we review the signaling cascades involved in hair cell formation and morphogenesis of the organ of Corti in developing mammals, the changing status of progenitor cells in the cochlea, and the regeneration of auditory hair cells in adult mammals. PMID:25593106
Impairments in Fear Conditioning in Mice Lacking the nNOS Gene
ERIC Educational Resources Information Center
Kelley, Jonathan B.; Balda, Mara A.; Anderson, Karen L.; Itzhak, Yossef
2009-01-01
The fear conditioning paradigm is used to investigate the roles of various genes, neurotransmitters, and substrates in the formation of fear learning related to contextual and auditory cues. In the brain, nitric oxide (NO) produced by neuronal nitric oxide synthase (nNOS) functions as a retrograde neuronal messenger that facilitates synaptic…
Order of Stimulus Presentation Influences Children's Acquisition in Receptive Identification Tasks
ERIC Educational Resources Information Center
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-01-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
2012-06-01
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.
Auditory-neurophysiological responses to speech during early childhood: Effects of background noise
White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina
2015-01-01
Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025
Laser Stimulation of Single Auditory Nerve Fibers
Littlefield, Philip D.; Vujanovic, Irena; Mundi, Jagmeet; Matic, Agnella Izzo; Richter, Claus-Peter
2011-01-01
Objectives/Hypothesis: One limitation of cochlear implants is the difficulty of stimulating spatially discrete spiral ganglion cell groups because of electrode interactions. Multipolar electrodes have improved on this somewhat, but at the cost of much higher device power consumption. Recently, it has been shown that spatially selective stimulation of the auditory nerve is possible with a mid-infrared laser aimed at the spiral ganglion via the round window. However, these neurons must be driven at adequate rates for optical radiation to be useful in cochlear implants. We herein use single-fiber recordings to characterize the responses of auditory neurons to optical radiation. Study Design: In vivo study using normal-hearing adult gerbils. Methods: Two diode lasers were used for stimulation of the auditory nerve. They operated between 1.844 μm and 1.873 μm, with pulse durations of 35 μs to 1,000 μs, and at repetition rates up to 1,000 pulses per second (pps). The laser outputs were coupled to a 200-μm-diameter optical fiber placed against the round window membrane and oriented toward the spiral ganglion. The auditory nerve was exposed through a craniotomy, and recordings were taken from single fibers during acoustic and laser stimulation. Results: Action potentials occurred 2.5 ms to 4.0 ms after the laser pulse. The latency jitter was up to 3 ms. Maximum rates of discharge averaged 97 ± 52.5 action potentials per second. The neurons did not strictly respond to the laser at stimulation rates over 100 pps. Conclusions: Auditory neurons can be stimulated by a laser beam passing through the round window membrane and driven at rates sufficient to convey useful auditory information. Optical stimulation and electrical stimulation have different characteristics, which could be selectively exploited in future cochlear implants. Level of Evidence: Not applicable. PMID:20830761
Impact of olfactory and auditory priming on the attraction to foods with high energy density.
Chambaron, S; Chisin, Q; Chabanet, C; Issanchou, S; Brand, G
2015-12-01
Recent research suggests that non-attentively perceived stimuli may significantly influence consumers' food choices. The main objective of the present study was to determine whether an olfactory prime (a sweet-fatty odour) and a semantic auditory prime (a nutritional prevention message), both presented incidentally, either alone or in combination, can influence subsequent food choices. The experiment included 147 participants who were assigned to four different conditions: a control condition, a scented condition, an auditory condition or an auditory-scented condition. All participants remained in the waiting room for 15 min while they performed a 'lure' task. In the scented condition, the participants were unobtrusively exposed to a 'pain au chocolat' odour. Those in the auditory condition were exposed to an audiotape including radio podcasts and a nutritional message. A third group of participants was exposed to both olfactory and auditory stimuli simultaneously. In the control condition, no stimulation was given. Following this waiting period, all participants moved into a non-odorised test room where they were asked to choose, from dishes served buffet-style, the starter, main course and dessert that they would actually eat for lunch. The results showed that the participants primed with the odour of 'pain au chocolat' tended to choose more desserts with high energy density (i.e., a waffle) than the participants in the control condition (p = 0.06). Unexpectedly, the participants primed with the nutritional auditory message chose to consume more desserts with high energy density than the participants in the control condition (p = 0.03). In the last condition (odour and nutritional message), participants chose to consume more desserts with high energy density than those in the control condition (p = 0.01), and the data reveal an additive effect of the two primes. Copyright © 2015 Elsevier Ltd. All rights reserved.
McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia
2015-01-01
Objectives: Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design: Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Results: Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions: Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition. PMID:26731160
Corley, Michael J; Caruso, Michael J; Takahashi, Lorey K
2012-01-18
Posttraumatic stress disorder (PTSD) is characterized by stress-induced symptoms including exaggerated fear memories, hypervigilance and hyperarousal. However, we are unaware of an animal model that investigates these hallmarks of PTSD, especially in relation to fear extinction and habituation. Therefore, to develop a valid animal model of PTSD, we exposed rats to different intensities of footshock stress to determine their effects on either auditory predator odor fear extinction or habituation of fear sensitization. In Experiment 1, rats were exposed to acute footshock stress (no shock control, 0.4 mA, or 0.8 mA) immediately prior to auditory fear conditioning training involving the pairing of auditory clicks with a cloth containing cat odor. When presented with the conditioned auditory clicks in the next 5 days of extinction testing conducted in a runway apparatus with a hide box, rats in the two shock groups engaged in higher levels of freezing and head-out vigilance-like behavior from the hide box than the no shock control group. This increase in fear behavior during extinction testing was likely due to auditory activation of the conditioned fear state, because Experiment 2 demonstrated that conditioned fear behavior was not broadly increased in the absence of the conditioned auditory stimulus. Experiment 3 was then conducted to determine whether acute exposure to stress induces a habituation-resistant sensitized fear state. We found that rats exposed to 0.8 mA footshock stress and subsequently tested for 5 days in the runway hide box apparatus with presentations of nonassociative auditory clicks exhibited high initial levels of freezing, followed by head-out behavior and culminating in locomotor hyperactivity. In addition, Experiment 4 indicated that without delivery of nonassociative auditory clicks, 0.8 mA footshock-stressed rats did not exhibit robust increases in sensitized freezing and locomotor hyperactivity, although head-out vigilance-like behavior continued to be observed. In summary, our animal model provides novel information on the effects of different intensities of footshock stress, auditory-predator odor fear conditioning, and their interactions on facilitating either extinction-resistant or habituation-resistant fear-related behavior. These results lay the foundation for exciting new investigations of the hallmarks of PTSD that include the stress-induced formation and persistence of traumatic memories and sensitized fear. Copyright © 2011 Elsevier Inc. All rights reserved.
Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey
The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed 4 weeks of RMQ training, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at pre-fitting, pre-training, and post-training to assess the effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task, as were increased P3b amplitudes. After training, the correlation between P3b and d' remained in the experimental group but not in the control group. Similarly, HINT testing showed improved speech perception after training only in the experimental group, and the criterion calculated in the auditory selective attention task was reduced only in the experimental group. ERP measures in the auditory selective attention task did not show any training-related changes. Overall, hearing aid use was associated with a decrement in involuntary attention switches to distractors in the auditory selective attention task, and RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task.
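The d' and criterion values analysed here are standard signal-detection quantities computed from hit and false-alarm rates. A generic Python sketch with hypothetical trial counts, using a common log-linear correction that may differ from the authors' exact procedure:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    """Sensitivity d' and criterion c from yes/no trial counts.
    A log-linear correction keeps the z-scores finite at 0% or 100% rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -(zh + zf) / 2

# Hypothetical counts: 78 hits, 22 misses, 12 false alarms, 88 correct rejections.
d_prime, criterion = sdt_measures(78, 22, 12, 88)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")   # ~1.92 and ~0.20
```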
Shaheen, Elham Ahmed; Shohdy, Sahar Saad; Abd Al Raouf, Mahmoud; Mohamed El Abd, Shereen; Abd Elhamid, Asmss
2011-09-01
Specific language impairment is a relatively common developmental condition in which a child fails to develop language at the typical rate despite normal general intellectual abilities, adequate exposure to language, and the absence of hearing impairments or neurological or psychiatric disorders. There is much controversy about the extent to which auditory processing deficits are important in the genesis of specific language impairment. The objective of this paper is to assess higher cortical functions in children with specific language impairment by assessing neurophysiological changes, and to correlate the results with the clinical picture of the patients in order to choose the proper rehabilitation training program. This study was carried out on 40 children diagnosed with specific language impairment and 20 normal children as a control group. All children were subjected to the assessment protocol applied in Kasr El-Aini hospital. They were also given a language test (receptive, expressive and total language items) and the audio-vocal items of the Illinois Test of Psycholinguistic Abilities (auditory reception, auditory association, verbal expression, grammatical closure, auditory sequential memory and sound blending), as well as audiological assessment that included peripheral audiological testing and P300 amplitude and latency measurement. The results revealed a highly significant difference in P300 amplitude and latency between the specific language impairment group and the control group. There were also strong correlations between P300 latency and grammatical closure, auditory sequential memory and sound blending, and significant correlations between P300 amplitude and auditory association and verbal expression. Children with specific language impairment, in spite of normal peripheral hearing, show cognitive and central auditory processing deficits, as evidenced by the P300 auditory event-related potential: prolonged latencies indicate a slow rate of processing, and small amplitudes indicate defective memory. These findings affect cognitive and language development in children with specific language impairment and should be considered when planning the intervention program. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Auditory Inhibition of Rapid Eye Movements and Dream Recall from REM Sleep
Stuart, Katrina; Conduit, Russell
2009-01-01
Study Objectives: There is debate in dream research as to whether ponto-geniculo-occipital (PGO) waves or cortical arousal during sleep underlie the biological mechanisms of dreaming. This study comprised 2 experiments. As eye movements (EMs) are currently considered the best noninvasive indicator of PGO burst activity in humans, the aim of the first experiment was to investigate the effect of low-intensity repeated auditory stimulation on EMs (and inferred PGO burst activity) during REM sleep. It was predicted that such auditory stimuli during REM sleep would have a suppressive effect on EMs. The aim of the second experiment was to examine the effects of this auditory stimulation on subsequent dream reporting on awakening. Design: Repeated measures design with counterbalanced order of experimental and control conditions across participants. Setting: Sleep laboratory-based polysomnography (PSG). Participants: Experiment 1: 5 males and 10 females aged 18-35 years (M = 20.8, SD = 5.4). Experiment 2: 7 males and 13 females aged 18-35 years (M = 23.3, SD = 5.5). Interventions: Below-waking-threshold tone presentations during REM sleep compared to control REM sleep conditions without tone presentations. Measurements and Results: PSG records were manually scored for sleep stages, EEG arousals, and EMs. Auditory stimulation during REM sleep was related to: (a) an increase in EEG arousal, (b) a decrease in the amplitude and frequency of EMs, and (c) a decrease in the frequency of visual imagery reports on awakening. Conclusions: The results of this study provide phenomenological support for PGO-based theories of dream reporting on awakening from sleep in humans. Citation: Stuart K; Conduit R. Auditory inhibition of rapid eye movements and dream recall from REM sleep. SLEEP 2009;32(3):399–408. PMID:19294960
Arakaki, Xianghong; Galbraith, Gary; Pikov, Victor; Fonteh, Alfred N.; Harrington, Michael G.
2014-01-01
Migraine symptoms often include auditory discomfort. Nitroglycerin (NTG)-triggered central sensitization (CS) provides a rodent model of migraine, but auditory brainstem pathways have not yet been studied in this example. Our objective was to examine brainstem auditory evoked potentials (BAEPs) in rat CS as a measure of possible auditory abnormalities. We used four subdermal electrodes to record horizontal (h) and vertical (v) dipole channel BAEPs before and after injection of NTG or saline. We measured the peak latencies (PLs), interpeak latencies (IPLs), and amplitudes for detectable waveforms evoked by 8, 16, or 32 kHz auditory stimulation. At 8 kHz stimulation, vertical channel positive PLs of waves 4, 5, and 6 (vP4, vP5, and vP6), and related IPLs from earlier negative or positive peaks (vN1-vP4, vN1-vP5, vN1-vP6; vP3-vP4, vP3-vP6) increased significantly 2 hours after NTG injection compared to the saline group. However, BAEP peak amplitudes at all frequencies, PLs and IPLs from the horizontal channel at all frequencies, and the vertical channel stimulated at 16 and 32 kHz showed no significant/consistent change. For the first time in the rat CS model, we show that BAEP PLs and IPLs ranging from putative bilateral medial superior olivary nuclei (P4) to the more rostral structures such as the medial geniculate body (P6) were prolonged 2 hours after NTG administration. These BAEP alterations could reflect changes in neurotransmitters and/or hypoperfusion in the midbrain. The similarity of our results with previous human studies further validates the rodent CS model for future migraine research. PMID:24680742
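As an illustration of how PLs and IPLs of this kind can be extracted from an averaged waveform, this sketch runs a simple peak picker over a toy BAEP; the waveform, sampling rate, and prominence threshold are all invented for illustration and are not the authors' analysis pipeline:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 20_000                                    # sampling rate (Hz), invented
t_ms = np.arange(0, 10, 1000 / fs)             # 10-ms analysis window

# Toy averaged BAEP: two positive waves at 4 ms and 6 ms (illustrative only).
avg = (0.4 * np.exp(-((t_ms - 4.0) / 0.3) ** 2)
       + 0.3 * np.exp(-((t_ms - 6.0) / 0.3) ** 2))

peaks, _ = find_peaks(avg, prominence=0.05)    # candidate positive peaks
pls = t_ms[peaks]                              # peak latencies (PLs), ms
ipls = np.diff(pls)                            # interpeak latencies (IPLs), ms
print(pls, ipls)                               # -> [4. 6.] [2.]
```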
Prentice, Jennifer R; Blackwell, Christopher S; Raoof, Naz; Bacon, Paul; Ray, Jaydip; Hickman, Simon J; Wilkinson, J Mark
2014-01-01
Case reports of patients with malfunctioning metal-on-metal hip replacement (MoMHR) prostheses suggest an association of elevated circulating metal levels with visual and auditory dysfunction. However, it is unknown if this is a cumulative exposure effect, and the impact of prolonged low-level exposure, relevant to the majority of patients with a well-functioning prosthesis, has not been studied. Twenty-four male patients with a well-functioning MoMHR and an age- and time-since-surgery-matched group of 24 male patients with conventional total hip arthroplasty (THA) underwent clinical and electrophysiological assessment of their visual and auditory health at a mean of ten years after surgery. Median circulating cobalt and chromium concentrations were higher in patients after MoMHR versus those with THA (P<0.0001), but were within the Medicines and Healthcare Products Regulatory Agency (UK) investigation threshold. Subjective auditory tests, including pure tone audiometric and speech discrimination findings, were similar between groups (P>0.05). Objective assessments, including amplitude and signal-to-noise ratio of transient evoked and distortion product oto-acoustic emissions (TEOAE and DPOAE, respectively), were similar for all the frequencies tested (P>0.05). Auditory brainstem responses (ABR) and cortical evoked response audiometry (ACR) were also similar between groups (P>0.05). Ophthalmological evaluations, including self-reported visual function by visual functioning questionnaire, as well as binocular low contrast visual acuity and colour vision, were similar between groups (P>0.05). Retinal nerve fibre layer thickness and macular volume measured by optical coherence tomography were also similar between groups (P>0.05). In the presence of the moderately elevated metal levels associated with well-functioning implants, MoMHR exposure is not associated with clinically demonstrable visual or auditory dysfunction.
The Listening Cube: A Three Dimensional Auditory Training Program
Ilona, Anderson; Marleen, Bammens; Josepha, Jans; Marianne, Haesevoets; Ria, Pans; Hilde, Vandistel; Yvette, Vrolix
2012-01-01
Objectives Here we present the Listening Cube, an auditory training program for children and adults receiving cochlear implants, developed during clinical practice at the KIDS Royal Institute for the Deaf in Belgium. We provide information on the content of the program as well as guidance on how to use it. Methods The Listening Cube is a three-dimensional auditory training model that takes the following into consideration: the sequence of auditory listening skills to be trained, the variety of materials to be used, and the range of listening environments to be considered. During auditory therapy, it is important to develop training protocols and materials that produce rapid improvement over a relatively short time period. Moreover, the effectiveness and general real-life applicability of these protocols to various users should be determined. Results Because this publication is not a research article but arises from daily clinical practice, we cannot report formal study results; we can only say that this auditory training model has been very successful. Since the first report was published in the Dutch language in 2003, more than 200 therapists in Belgium and the Netherlands have followed a training course and elected to implement the Listening Cube in their daily practice with children and adults with a hearing loss, especially those wearing cochlear implants. Conclusion The Listening Cube is a tool to aid in planning therapeutic sessions that meet individual needs, which is often challenging. The three dimensions of the cube are levels of perception, practice material, and practice conditions. These dimensions can serve as a visual reminder of the task analysis and of other considerations that play a role in structuring therapy sessions. PMID:22701766
Modality-specificity of Selective Attention Networks
Stewart, Hannah J.; Amitay, Sygal
2015-01-01
Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709
Opposite effects of fear conditioning and extinction on dendritic spine remodelling.
Lai, Cora Sau Wan; Franke, Thomas F; Gan, Wen-Biao
2012-02-19
It is generally believed that fear extinction is a form of new learning that inhibits rather than erases previously acquired fear memories. Although this view has gained much support from behavioural and electrophysiological studies, the hypothesis that extinction causes the partial erasure of fear memories remains viable. Using transcranial two-photon microscopy, we investigated how neural circuits are modified by fear learning and extinction by examining the formation and elimination of postsynaptic dendritic spines of layer-V pyramidal neurons in the mouse frontal association cortex. Here we show that fear conditioning by pairing an auditory cue with a footshock increases the rate of spine elimination. By contrast, fear extinction by repeated presentation of the same auditory cue without a footshock increases the rate of spine formation. The degrees of spine remodelling induced by fear conditioning and extinction strongly correlate with the expression and extinction of conditioned fear responses, respectively. Notably, spine elimination and formation induced by fear conditioning and extinction occur on the same dendritic branches in a cue- and location-specific manner: cue-specific extinction causes formation of dendritic spines within a distance of two micrometres from spines that were eliminated after fear conditioning. Furthermore, reconditioning preferentially induces elimination of dendritic spines that were formed after extinction. Thus, within vastly complex neuronal networks, fear conditioning, extinction and reconditioning lead to opposing changes at the level of individual synapses. These findings also suggest that fear memory traces are partially erased after extinction.
Probing sensorimotor integration during musical performance.
Furuya, Shinichi; Furukawa, Yuta; Uehara, Kazumasa; Oku, Takanori
2018-03-10
An integration of afferent sensory information from the visual, auditory, and proprioceptive systems into the execution and updating of motor programs plays a crucial role in the control and acquisition of skillful sequential movements in musical performance. However, conventional behavioral and neurophysiological techniques that have been applied to study simplistic motor behaviors limit the elucidation of online sensorimotor integration processes underlying skillful musical performance. Here, we propose two novel techniques that were developed to investigate the roles of auditory and proprioceptive feedback in piano performance. First, a closed-loop noninvasive brain stimulation system consisting of transcranial magnetic stimulation, a motion sensor, and a microcomputer enabled us to assess time-varying cortical processes subserving auditory-motor integration during piano playing. Second, a force-field system capable of manipulating the weight of a piano key allowed for characterizing movement adaptation based on the feedback obtained, which can shed light on the formation of an internal representation of the piano. Results of neurophysiological and psychophysics experiments provided evidence validating these systems as effective means for disentangling the computational and neural processes of sensorimotor integration in musical performance. © 2018 New York Academy of Sciences.
Neuropsychological implications of selective attentional functioning in psychopathic offenders.
Mayer, Andrew R; Kosson, David S; Bedrick, Edward J
2006-09-01
Several core characteristics of the psychopathic personality disorder (i.e., impulsivity, failure to attend to interpersonal cues) suggest that psychopaths suffer from disordered attention. However, there is mixed evidence from the cognitive literature as to whether they exhibit superior or deficient selective attention, which has led to the formation of several distinct theories of attentional functioning in psychopathy. The present experiment investigated participants' abilities to purposely allocate attentional resources on the basis of auditory or visual linguistic information and directly tested both theories of deficient or superior selective attention in psychopathy. Specifically, 91 male inmates at a county jail were presented with either auditory or visual linguistic cues (with and without distractors) that correctly indicated the position of an upcoming visual target in 75% of the trials. The results indicated that psychopaths did not exhibit evidence of superior selective attention in any of the conditions but were generally less efficient in shifting attention on the basis of linguistic cues, especially in regard to auditory information. Implications for understanding psychopaths' cognitive functioning and possible neuropsychological deficits are addressed. ((c) 2006 APA, all rights reserved).
2018-01-01
This study tested the hypothesis that object-based attention modulates the discrimination of level increments in stop-consonant noise bursts. With consonant-vowel-consonant (CvC) words consisting of an ≈80-dB vowel (v), a pre-vocalic (Cv) and a post-vocalic (vC) stop-consonant noise burst (≈60-dB SPL), we measured level discrimination thresholds (LDTs) for level increments (ΔL) in the noise bursts presented either in CvC context or in isolation. In the 2-interval 2-alternative forced-choice task, each observation interval presented a CvC word (e.g., /pæk/ /pæk/), and normal-hearing participants had to discern ΔL in the Cv or vC burst. Based on the linguistic word labels, the auditory events of each trial were perceived as two auditory objects (Cv-v-vC and Cv-v-vC) that group together the bursts and vowels, hindering selective attention to ΔL. To discern ΔL in Cv or vC, the events must be reorganized into three auditory objects: the to-be-attended pre-vocalic (Cv–Cv) or post-vocalic burst pair (vC–vC), and the to-be-ignored vowel pair (v–v). Our results suggest that instead of being automatic this reorganization requires training, in spite of the use of familiar CvC words. Relative to bursts in isolation, bursts in context always produced inferior ΔL discrimination accuracy (a context effect), which depended strongly on the acoustic separation between the bursts and the vowel: discrimination was much keener for the object set apart from the vowel (post-vocalic) than for the object adjoining it (pre-vocalic) (a temporal-position effect). Variability in CvC dimensions that did not alter the noise-burst perceptual grouping had minor effects on discrimination accuracy. In addition to being robust and persistent, these effects are relatively general, appearing in forced-choice tasks with one or two observation intervals, with or without variability in the temporal position of ΔL, and with either fixed or roving CvC standards. The results lend support to the hypothesis. PMID:29364931
Chiang, Hsueh-Sheng; Eroh, Justin; Spence, Jeffrey S; Motes, Michael A; Maguire, Mandy J; Krawczyk, Daniel C; Brier, Matthew R; Hart, John; Kraut, Michael A
2016-08-01
How the brain combines the neural representations of features that comprise an object in order to activate a coherent object memory is poorly understood, especially when the features are presented in different modalities (visual vs. auditory) and domains (verbal vs. nonverbal). We examined this question using three versions of a modified Semantic Object Retrieval Test, where object memory was probed by a feature presented as a written word, a spoken word, or a picture, followed by a second feature always presented as a visual word. Participants indicated whether each feature pair elicited retrieval of the memory of a particular object. Sixteen subjects completed one of the three versions (N=48 in total) while their EEG was recorded simultaneously. We analyzed EEG data in four separate frequency bands (delta: 1-4 Hz; theta: 4-7 Hz; alpha: 8-12 Hz; beta: 13-19 Hz) using a multivariate data-driven approach. We found that alpha power time-locked to response was modulated by both cross-modality (visual vs. auditory) and cross-domain (verbal vs. nonverbal) probing of semantic object memory. In addition, retrieval trials showed greater changes in all frequency bands compared to non-retrieval trials across all stimulus types in both response-locked and stimulus-locked analyses, suggesting dissociable neural subcomponents involved in binding object features to retrieve a memory. We conclude that these findings support both modality/domain-dependent and modality/domain-independent mechanisms during semantic object memory retrieval. Copyright © 2016 Elsevier B.V. All rights reserved.
Analysis of the relationship between cognitive skills and unilateral sensory hearing loss.
Calderón-Leyva, I; Díaz-Leines, S; Arch-Tirado, E; Lino-González, A L
2018-06-01
To analyse cognitive skills in patients with severe unilateral hearing loss versus those of subjects with normal hearing. 40 adults participated: 20 patients (10 women and 10 men) with severe unilateral hearing loss and 20 healthy subjects matched to the study group. Cognitive abilities were measured with the Spanish version of the Woodcock Johnson Battery-Revised; central auditory processing was assessed with monaural psychoacoustic tests. Box plots were drawn and t tests were performed, with significance set at P≤.05. A comparison of performance on the filtered word and time-compressed disyllabic word tests between patients and controls revealed a statistically significant difference (P≤.05), with greater variability among responses by hearing-impaired subjects. This same group also showed better cognitive performance on the numbers reversed, visual auditory learning, analysis synthesis, concept formation, and incomplete words tests. Patients with hearing loss performed more poorly than controls on the filtered word and time-compressed disyllabic word tests, but more competently on memory, reasoning, and auditory processing tasks. Complementary tests, such as those assessing central auditory processes and cognitive ability, are important and helpful for designing habilitation/rehabilitation and therapeutic strategies intended to optimise and stimulate cognitive skills in subjects with unilateral hearing impairment. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
Newborn infants perceive abstract numbers
Izard, Véronique; Sann, Coralie; Spelke, Elizabeth S.; Streri, Arlette
2009-01-01
Although infants and animals respond to the approximate number of elements in visual, auditory, and tactile arrays, only human children and adults have been shown to possess abstract numerical representations that apply to entities of all kinds (e.g., 7 samurai, seas, or sins). Do abstract numerical concepts depend on language or culture, or do they form a part of humans' innate, core knowledge? Here we show that newborn infants spontaneously associate stationary, visual-spatial arrays of 4–18 objects with auditory sequences of events on the basis of number. Their performance provides evidence for abstract numerical representations at the start of postnatal experience. PMID:19520833
Wahab, Suzaily; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Zakaria, Mohd. Normani
2016-01-01
Objective Electrophysiological studies, which have mostly focused on the afferent pathway, have proven that auditory processing deficits exist in patients with schizophrenia. Nevertheless, reports on the suppressive effect of the efferent auditory pathway on cochlear outer hair cells among schizophrenia patients are limited. The present case-control study examined the contralateral suppression of transient evoked otoacoustic emissions (TEOAEs) in patients with schizophrenia. Methods Participants were twenty-three healthy controls and sixteen schizophrenia patients with normal hearing, middle ear function, and cochlear outer hair cell function. Absolute non-linear and linear TEOAEs were measured in both ears by delivering click stimuli at 80 dB SPL and 60 dB SPL, respectively. Subsequently, contralateral suppression was determined by subtracting the absolute TEOAE responses obtained at 60 dBpe SPL in the absence and presence of contralateral white noise delivered at 65 dB HL. No attention tasks were conducted during measurements. Results We found no significant difference in absolute TEOAE responses at 80 dB SPL in either diagnosis or ear groups (p>0.05). However, overall contralateral suppression was significantly larger in schizophrenia patients (p<0.05). Specifically, patients with schizophrenia demonstrated significantly increased right ear contralateral suppression compared to healthy controls (p<0.05). Conclusion The present findings suggest an increased inhibitory effect of the efferent auditory pathway, especially on the right cochlear outer hair cells. Further studies investigating these increased suppressive effects are crucial to expand the current understanding of auditory hallucination mechanisms in schizophrenia patients. PMID:26766950
Binaural auditory beats affect vigilance performance and mood.
Lane, J D; Kasian, S J; Owens, J E; Marsh, G R
1998-01-01
When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
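The beat percept described above follows from the sum-to-product identity for two sinusoids. For physically mixed tones the identity is exact; in dichotic (binaural-beat) presentation the fluctuation arises neurally rather than acoustically, though the perceived rate is the same. As a worked illustration (carrier frequencies are illustrative only; the abstract does not specify those used):

\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\bigl(\pi(f_1 - f_2)\,t\bigr)\,\sin\bigl(\pi(f_1 + f_2)\,t\bigr)

The envelope 2\,\lvert\cos(\pi(f_1 - f_2)\,t)\rvert peaks (f_1 - f_2) times per second, so tones of 416 Hz and 400 Hz would be heard as a single tone near 408 Hz waxing and waning 16 times per second, matching the 16 Hz beta-band beat frequency used in the study.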
Auditory displays as occasion setters.
Mckeown, Denis; Isherwood, Sarah; Conway, Gareth
2010-02-01
The aim of this study was to evaluate whether representational sounds that capture the richness of experience of a collision enhance performance in braking to avoid a collision relative to other forms of warnings in a driving simulator. There is increasing interest in auditory warnings that are informative about their referents. But as well as providing information about some intended object, warnings may be designed to set the occasion for a rich body of information about the outcomes of behavior in a particular context. These richly informative warnings may offer performance advantages, as they may be rapidly processed by users. An auditory occasion setter for a collision (a recording of screeching brakes indicating imminent collision) was compared with two other auditory warnings (an abstract and an "environmental" sound), a speech message, a visual display, and no warning in a fixed-base driving simulator as interfaces to a collision avoidance system. The main measure was braking response times at each of two headways (1.5 s and 3 s) to a lead vehicle. The occasion setter demonstrated statistically significantly faster braking responses at each headway in 8 out of 10 comparisons (with braking responses equally fast to the abstract warning at 1.5 s and the environmental warning at 3 s). Auditory displays that set the occasion for an outcome in a particular setting and for particular behaviors may offer small but critical performance enhancements in time-critical applications. The occasion setter could be applied in settings where speed of response by users is of the essence.
Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex
Micheyl, Christophe; Steinschneider, Mitchell
2016-01-01
Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
Hearing and toluene exposure - a contribution to the theme
Augusto, Lívia Sanches Calvi; Kulay, Luiz Alexandre; Franco, Eloisa Sartori
2012-01-01
Introduction: With technological advances and changes in production processes, workers are exposed to different physical and chemical agents in their work environment. Toluene is an organic solvent present in glues, paints, oils, and other products. Objective: To review the literature findings indicating that workers exposed simultaneously to noise and solvents have a greater probability of developing hearing loss of peripheral origin. Method: Review of the literature on occupational hearing loss in workers exposed to noise and toluene. Results: Isolated exposure to toluene can also trigger changes in auditory thresholds. The audiometric findings for toluene ototoxicity resemble those for noise exposure, making it difficult to distinguish the audiometric result of combined exposure (noise and toluene) from that of exposure to noise alone. Conclusion: Most studies were designed to generate hypotheses and should be considered preliminary steps toward further research. To date, workplace agents and their effects have been studied in isolation, and their tolerance limits do not take combined exposures into account. Given that workers are exposed to multiple agents and that hearing loss is irreversible, the tests implemented must be more complete, and all workers should take part in a hearing-prevention program, even those exposed to doses below the recommended exposure limit. PMID:25991943
The auditory scene: an fMRI study on melody and accompaniment in professional pianists.
Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela
2014-11-15
The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.
Hidden Hearing Injury: The Emerging Science and Military Relevance of Cochlear Synaptopathy.
Tepe, Victoria; Smalt, Christopher; Nelson, Jeremy; Quatieri, Thomas; Pitts, Kenneth
2017-09-01
The phenomenon recently described as "hidden hearing loss" was the subject of a meeting co-hosted by the Department of Defense Hearing Center of Excellence and MIT Lincoln Laboratory to consider the potential relevance of noise-related synaptopathic injury to military settings and performance, service-related injury scenarios, and military medical priorities. Participants included approximately 50 researchers and subject matter experts from academic, federal, and military laboratories. Here we present a synthesis of discussion topics and concerns, as well as specific research objectives identified to develop militarily relevant knowledge. We consider findings from studies to date that have demonstrated cochlear synaptopathy and neurodegenerative processes apparently linked to noise exposure in animal models. We explore the potential relevance of these findings to the prediction and prevention of military hearing injuries, and to comorbid injuries in the neurological domain. Noise-induced cochlear synaptopathic injury is not detected by conventional audiometric assessment of threshold sensitivity. Animal studies suggest there may be a generous window of opportunity for intervention to mitigate or prevent cochlear neurodegenerative processes, e.g., by administration of neurotrophins or antioxidants. However, it is not yet known if the mechanisms that underlie "hidden hearing loss" also occur in human beings or, if so, how to identify them early, and how and when to intervene. Neurological injuries resulting from noise exposures via the auditory system have potentially significant implications for military Service Member performance, long-term Veteran health, and noise exposure standards. Mediated via auditory pathways, such injuries have possible relationship to clinical impairments including speech perception, and may be a largely overlooked contributor to cognitive symptoms associated with other military service-related injuries such as blast exposure and brain trauma. The potential health and performance consequences of noise-induced cochlear synaptopathic injury are easily overlooked, especially if it is assumed that hearing threshold sensitivity loss is the major concern. There should be a renewed impetus to further characterize and model synaptopathic mechanisms of auditory injury; study its potential impact on human auditory function, cognition, and performance metrics of military relevance; and develop solutions for auditory protection (including noise dosimetry) and treatment if appropriate following noise or blast exposure in military scenarios. We identify specific problems, solution objectives, and research objectives. Recommended research calls for a multidisciplinary approach to address cochlear nerve synaptopathy, central (brain) dysfunction, noise exposure measurement and metrics, and clinical assessment. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2018-05-01
The trends in cochlear implantation candidacy and benefit have changed rapidly in the last two decades. It is now widely accepted that early implantation leads to better postimplant outcomes. Although some generalizations can be made about postimplant auditory and language performance, neural mechanisms need to be studied to predict individual prognosis. The aim of this study was to use functional magnetic resonance imaging (fMRI) to identify preimplant neuroimaging biomarkers that predict children's postimplant auditory and language outcomes as measured by parental observation/reports. This is a pre-post correlational measures study. Twelve possible cochlear implant candidates with bilateral severe to profound hearing loss were recruited via referrals for clinical magnetic resonance imaging to ensure structural integrity of the auditory nerve for implantation. Participants underwent cochlear implantation at a mean age of 19.4 mo. All children used the advanced combination encoder strategy (ACE, Cochlear Corporation™, Nucleus® Freedom cochlear implants). Three participants received an implant in the right ear, one in the left ear, and eight received bilateral implants. Participants' preimplant neuronal activation in response to two auditory stimuli was studied using an event-related fMRI method. Blood oxygen level dependent contrast maps were calculated for speech and noise stimuli. The general linear model was used to create z-maps. The Auditory Skills Checklist (ASC) and the SKI-HI Language Development Scale (SKI-HI LDS) were administered to the parents 2 yr after implantation. A nonparametric correlation analysis was implemented between preimplant fMRI activation and postimplant auditory and language outcomes based on ASC and SKI-HI LDS. Statistical Parametric Mapping software was used to create regression maps between fMRI activation and scores on the aforementioned tests. Regression maps were overlaid on the Imaging Research Center infant template and visualized in MRIcro. Regression maps revealed two clusters of brain activation for the speech versus silence contrast and five clusters for the noise versus silence contrast that were significantly correlated with the parental reports. These clusters included auditory and extra-auditory regions such as the middle temporal gyrus, supramarginal gyrus, precuneus, cingulate gyrus, middle frontal gyrus, subgyral, and middle occipital gyrus. Both positive and negative correlations were observed. Correlation values for the different clusters ranged from -0.90 to 0.95 and were significant at a corrected p value of <0.05. Correlations suggest that postimplant performance may be predicted by activation in specific brain regions. The results of the present study suggest that (1) fMRI can be used to identify neuroimaging biomarkers of auditory and language performance before implantation and (2) activation in certain brain regions may be predictive of postimplant auditory and language performance as measured by parental observation/reports. American Academy of Audiology.
Wu, Chunxiao; Huang, Lexing; Tan, Hui; Wang, Yanting; Zheng, Hongyi; Kong, Lingmei; Zheng, Wenbin
2016-05-15
Our objective was to evaluate age-dependent changes in microstructure and metabolism in the auditory neural pathway of children with profound sensorineural hearing loss (SNHL), and to differentiate between good and poor surgical outcomes of cochlear implantation (CI) by using diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS). Ninety-two SNHL children (49 males, 43 females; mean age, 4.9 years) were studied by conventional MR imaging, DTI, and MRS. Patients were divided into three groups: Group A consisted of children ≤1 year old (n=20), Group B consisted of children 1-3 years old (n=31), and Group C consisted of children 3-14 years old (n=41). Among the 31 patients (19 males and 12 females, aged 12 months to 14 years) with CI, 18 patients (mean age, 4.8±0.7 years) with a Categories of Auditory Performance (CAP) score over five were classified into the good outcome group and 13 patients (mean age, 4.4±0.7 years) with a CAP score below five were classified into the poor outcome group. Two DTI parameters, fractional anisotropy (FA) and apparent diffusion coefficient (ADC), were measured in the superior temporal gyrus (STG) and auditory radiation. Regions of interest for metabolic change measurements were located inside the STG. DTI values were measured based on region-of-interest analysis, and MRS values were used for correlation analysis with CAP scores. Compared with healthy individuals, the 92 SNHL patients displayed decreased FA values in the auditory radiation and STG (p<0.05). Only decreased FA values in the auditory radiation were observed in Group A. Decreased FA values in both the auditory radiation and STG were observed in Groups B and C. However, in Group C, the N-acetyl aspartate/creatinine ratio in the STG was also significantly decreased (p<0.05). Correlation analyses at 12 months post-operation revealed strong correlations between FA in the auditory radiation and CAP scores (r=0.793, p<0.01). DTI and MRS can be used to evaluate microstructural alterations and metabolite concentration changes in the auditory neural pathway that are not detectable by conventional MR imaging. The observed changes in FA suggest that children with SNHL have a developmental delay in myelination of the auditory neural pathway; the greater metabolite concentration changes in the auditory cortex of older children suggest that early cochlear implantation might be more effective in restoring hearing in children with SNHL. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.
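For reference, the two DTI parameters reported above are standard functions of the eigenvalues (λ1, λ2, λ3) of the diffusion tensor; the conventional definitions, assumed here rather than restated in the abstract, are:

\mathrm{FA} = \sqrt{\tfrac{1}{2}}\;\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}, \qquad \mathrm{ADC} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}

FA ranges from 0 (fully isotropic diffusion) to 1 (diffusion along a single axis), which is why reduced FA in the auditory radiation is read as less coherent, less myelinated fibre organization.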
Hsu, Ruey-Fen; Ho, Chi-Kung; Lu, Sheng-Nan; Chen, Shun-Sheng
2010-10-01
An objective investigation is needed to verify the existence and severity of hearing impairments resulting from work-related, noise-induced hearing loss in arbitration of medicolegal aspects. We investigated the accuracy of multiple-frequency auditory steady-state responses (Mf-ASSRs) in subjects with sensorineural hearing loss (SNHL) with and without occupational noise exposure. Cross-sectional study. Tertiary referral medical centre. Pure-tone audiometry and Mf-ASSRs were recorded in 88 subjects (34 patients had occupational noise-induced hearing loss [NIHL], 36 patients had SNHL without noise exposure, and 18 volunteers were normal controls). Inter- and intragroup comparisons were made. A predicting equation was derived using multiple linear regression analysis. ASSRs and pure-tone thresholds (PTTs) showed a strong correlation for all subjects (r = .77 to .94), a relationship captured by the derived regression equation. The differences between the ASSR and PTT were significantly higher for the NIHL group than for the subjects with non-noise-induced SNHL (p < .001). Mf-ASSR is a promising tool for objectively evaluating hearing thresholds. Predictive value may be lower in subjects with occupational hearing loss. Regardless of carrier frequency, the severity of hearing loss affects the steady-state response. Moreover, the ASSR may assist in detecting noise-induced injury of the auditory pathway. A multiple linear regression equation that accurately predicts thresholds, taking all effect factors into consideration, was derived.
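The derived equation and its coefficients are not reproduced in the abstract, so they cannot be restated here; purely as an illustrative sketch, a multiple linear regression of the kind described has the generic form

\widehat{\mathrm{PTT}} = \beta_0 + \beta_1\,\mathrm{ASSR} + \beta_2 X_2 + \cdots + \beta_k X_k

where the X_i are hypothetical placeholders for the unspecified "effect factors" (e.g., carrier frequency or group membership) and the β coefficients would be estimated by least squares.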
Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S
2014-03-01
The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.
Domain-specific impairment of source memory following a right posterior medial temporal lobe lesion.
Peters, Jan; Koch, Benno; Schwarz, Michael; Daum, Irene
2007-01-01
This single case analysis of memory performance in a patient with an ischemic lesion affecting posterior but not anterior right medial temporal lobe (MTL) indicates that source memory can be disrupted in a domain-specific manner. The patient showed normal recognition memory for gray-scale photos of objects (visual condition) and spoken words (auditory condition). While memory for visual source (texture/color of the background against which pictures appeared) was within the normal range, auditory source memory (male/female speaker voice) was at chance level, a performance pattern significantly different from the control group. This dissociation is consistent with recent fMRI evidence of anterior/posterior MTL dissociations depending upon the nature of source information (visual texture/color vs. auditory speaker voice). The findings are in good agreement with the view of dissociable memory processing by the perirhinal cortex (anterior MTL) and parahippocampal cortex (posterior MTL), depending upon the neocortical input that these regions receive. (c) 2007 Wiley-Liss, Inc.
Word learning in deaf children with cochlear implants: effects of early auditory experience.
Houston, Derek M; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T
2012-05-01
Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. © 2012 Blackwell Publishing Ltd.
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
Rathee, Ruchika; Luhrmann, Tanya M; Bhatia, Triptish; Deshpande, Smita N
2018-01-01
Poor cognitive insight in schizophrenia has been linked to delusions, hallucinations, and negative symptoms as well as to depressive/anxiety symptoms. Its impact on quality of life has been less studied, especially in schizophrenia subjects with ongoing auditory hallucinations. The Beck Cognitive Insight Scale (BCIS) and the Quality of Life Scale (QLS) were administered, after due translation and validation, to subjects who met DSM-IV criteria for schizophrenia. All subjects reported ongoing auditory hallucinations at recruitment. Mean composite cognitive insight scores from participants (N = 60) (2.97 ± 2.649) were in the lower range compared to the published literature. Cognitive insight scores as well as self-reflectiveness subscale scores, but not self-certainty scores, correlated significantly with the QLS scores (p < 0.001). Results suggest that better cognitive insight, especially self-reflectiveness, may be linked to better quality of life. Self-reflectiveness could be a useful construct to address in psychotherapy to improve rehabilitation. Copyright © 2017. Published by Elsevier B.V.
The effects of aging on the working memory processes of multimodal information.
Solesio-Jofre, Elena; López-Frutos, José María; Cashdollar, Nathan; Aurtenetxe, Sara; de Ramón, Ignacio; Maestú, Fernando
2017-05-01
Normal aging is associated with deficits in working memory processes. However, the majority of research has focused on storage or inhibitory processes using unimodal paradigms, without addressing their relationships across different sensory modalities. Hence, we pursued two objectives. The first was to examine the effects of aging on storage and inhibitory processes; the second was to evaluate aging effects on multisensory integration of visual and auditory stimuli. To this end, young and older participants performed a multimodal task for visual and auditory pairs of stimuli with increasing memory load at encoding and interference during retention. Our results showed an age-related increase in vulnerability to interrupting and distracting interference, reflecting inhibitory deficits related to the off-line reactivation and on-line suppression of relevant and irrelevant information, respectively. Storage capacity was impaired with increasing task demands in both age groups. Additionally, older adults showed a deficit in multisensory integration, with poorer performance for new visual compared to new auditory information.
Music supported therapy promotes motor plasticity in individuals with chronic stroke.
Ripollés, P; Rojo, N; Grau-Sánchez, J; Amengual, J L; Càmara, E; Marco-Pallarés, J; Juncadella, M; Vaquero, L; Rubio, F; Duarte, E; Garrido, C; Altenmüller, E; Münte, T F; Rodríguez-Fornells, A
2016-12-01
Novel rehabilitation interventions have improved motor recovery by induction of neural plasticity in individuals with stroke. Of these, Music-supported therapy (MST) is based on music training designed to remediate motor deficits. Music training requires multimodal processing, involving the integration and co-operation of visual, motor, auditory, affective and cognitive systems. The main objective of this study was to assess, in a group of 20 individuals suffering from chronic stroke, the motor, cognitive, emotional and neuroplastic effects of MST. Using functional magnetic resonance imaging (fMRI), we observed a clear restitution of both activity and connectivity among auditory-motor regions of the affected hemisphere. Importantly, no differences were observed in this functional network in a healthy control group, ruling out possible confounds such as repeated imaging testing. Moreover, this increase in activity and connectivity between auditory and motor regions was accompanied by a functional improvement of the paretic hand. The present results confirm MST as a viable intervention to improve motor function in chronic stroke individuals.
Basirat, Anahita; Schwartz, Jean-Luc; Sato, Marc
2012-01-01
The verbal transformation effect (VTE) refers to perceptual switches while listening to a speech sound repeated rapidly and continuously. It is a specific case of perceptual multistability providing a rich paradigm for studying the processes underlying the perceptual organization of speech. While the VTE has been mainly considered as a purely auditory effect, this paper presents a review of recent behavioural and neuroimaging studies investigating the role of perceptuo-motor interactions in the effect. Behavioural data show that articulatory constraints and visual information from the speaker's articulatory gestures can influence verbal transformations. In line with these data, functional magnetic resonance imaging and intracranial electroencephalography studies demonstrate that articulatory-based representations play a key role in the emergence and the stabilization of speech percepts during a verbal transformation task. Overall, these results suggest that perceptuo (multisensory)-motor processes are involved in the perceptual organization of speech and the formation of speech perceptual objects. PMID:22371618
Jerger, Susan; Tye-Murray, Nancy; Damian, Markus F.; Abdi, Hervé
2016-01-01
Objectives Our research determined 1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] vs auditory), fidelity (intact vs non-intact auditory onsets), and lexical status (words vs nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) vs. children with normal hearing (CNH); and 2) how the degree of hearing impairment (HI), auditory word recognition, and age influenced results in CHI. Note that some of our AV stimuli were not the traditional bimodal input but instead consisted of an intact consonant/rhyme in the visual track coupled to a non-intact onset/rhyme in the auditory track. Example stimuli for the word bag are: 1) AV: intact visual (b/ag) coupled to non-intact auditory (−b/ag) and 2) Auditory: static face coupled to the same non-intact auditory (−b/ag). Our question was whether the intact visual speech would “restore or fill in” the non-intact auditory speech, in which case performance for the same auditory stimulus would differ depending upon the presence/absence of visual speech. Design Participants were 62 CHI and 62 CNH whose ages had a group mean and distribution akin to those of the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: 1) spoke English as a native language, 2) communicated successfully aurally/orally, and 3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multi-modal picture word task. Results Both CHI and CNH showed greater phonological priming from high than low fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ between the CHI and CNH; thus these CHI appeared to have sufficiently well specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge. Two exceptions occurred, however. First, with regard to lexical status, both the CHI and CNH showed significantly greater phonological priming from the nonwords than words, a pattern consistent with the prediction that children are more aware of phonetic-phonological content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that the CHI showed more nearly equal priming by the high vs. low fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition, but not degree of HI or age, uniquely influenced phonological priming by the nonwords presented AV. Conclusions With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, we documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically, these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI. PMID:27438867
Nonassociative Learning Processes Determine Expression and Extinction of Conditioned Fear in Mice
ERIC Educational Resources Information Center
Kamprath, Kornelia; Wotjak, Carsten T.
2004-01-01
Freezing to a tone following auditory fear conditioning is commonly considered as a measure of the strength of the tone-shock association. The decrease in freezing on repeated nonreinforced tone presentation following conditioning, in turn, is attributed to the formation of an inhibitory association between tone and shock that leads to a…
Distinctive Roles for Amygdalar CREB in Reconsolidation and Extinction of Fear Memory
ERIC Educational Resources Information Center
Tronson, Natalie C.; Wiseman, Shari L.; Neve, Rachael L.; Nestler, Eric J.; Olausson, Peter; Taylor, Jane R.
2012-01-01
Cyclic AMP response element binding protein (CREB) plays a critical role in fear memory formation. Here we determined the role of CREB selectively within the amygdala in reconsolidation and extinction of auditory fear. Viral overexpression of the inducible cAMP early repressor (ICER) or the dominant-negative mCREB, specifically within the lateral…
Effects of the Presence of Audio and Type of Game Controller on Learning of Rhythmic Accuracy
ERIC Educational Resources Information Center
Thomas, James William
2017-01-01
"Guitar Hero III" and similar games potentially offer a vehicle for improvement of musical rhythmic accuracy with training delivered in both visual and auditory formats and by use of its novel guitar-shaped interface; however, some theories regarding multimedia learning suggest sound is a possible source of extraneous cognitive load…
ERIC Educational Resources Information Center
Erdodi, Laszlo; Lajiness-O'Neill, Renee; Schmitt, Thomas A.
2013-01-01
Visual and auditory verbal learning using a selective reminding format was studied in a mixed clinical sample of children with autism spectrum disorder (ASD) (n = 42), attention-deficit hyperactivity disorder (n = 83), velocardiofacial syndrome (n = 17) and neurotypicals (n = 38) using the Test of Memory and Learning to (1) more thoroughly…
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
Werner, Sebastian; Noppeney, Uta
2010-08-01
Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé
2017-01-01
Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. The objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets), for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az—responses in the audiovisual than auditory mode. Results Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for /–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in the CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003
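The bias-free d' analysis mentioned in this abstract has a standard closed form from signal detection theory. A minimal sketch in Python, with hypothetical hit and false-alarm rates (the study's actual rates are not reproduced here):

```python
# Minimal sketch of a bias-free discrimination index (d');
# the rates below are invented for illustration.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(H) - z(FA); larger values mean better discrimination."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: a listener who answers "different" on 85% of truly
# different pairs (hits) but also on 30% of identical pairs
# (false alarms).
print(round(d_prime(0.85, 0.30), 2))  # ~1.56
```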
The interaction of acoustic and linguistic grouping cues in auditory object formation
NASA Astrophysics Data System (ADS)
Shapley, Kathy; Carrell, Thomas
2005-09-01
One of the earliest explanations for good speech intelligibility in poor listening situations was context [Miller et al., J. Exp. Psychol. 41 (1951)]. Context presumably allows listeners to group and predict speech appropriately and is known as a top-down listening strategy. Amplitude comodulation is another mechanism that has been shown to improve sentence intelligibility. Amplitude comodulation provides acoustic grouping information without changing the linguistic content of the desired signal [Carrell and Opie, Percept. Psychophys. 52 (1992); Hu and Wang, Proceedings of ICASSP-02 (2002)] and is considered a bottom-up process. The present experiment investigated how amplitude comodulation and semantic information combined to improve speech intelligibility. Sentences with high- and low-predictability word sequences [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84 (1988)] were constructed in two different formats: time-varying sinusoidal sentences (TVS) and reduced-channel sentences (RC). The stimuli were chosen because they minimally represent the traditionally defined speech cues and therefore emphasized the importance of the high-level context effects and low-level acoustic grouping cues. Results indicated that semantic information did not influence intelligibility levels of TVS and RC sentences. In addition amplitude modulation aided listeners' intelligibility scores in the TVS condition but hindered listeners' intelligibility scores in the RC condition.
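Amplitude comodulation of the kind used above as a bottom-up grouping cue can be illustrated by imposing a single low-rate envelope on several sinusoidal carriers, so that they tend to group as one auditory object. This is only a schematic sketch; the carrier frequencies and the 10-Hz modulation rate are illustrative assumptions, not the study's stimulus parameters:

```python
# Sketch of amplitude comodulation: one shared envelope applied to
# several carriers. All parameter values are illustrative.
import numpy as np

fs = 16000                                 # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)              # 1 s of signal
carriers = [500, 1500, 2500]               # "formant-like" tone frequencies
envelope = 0.5 * (1 + np.sin(2 * np.pi * 10 * t))  # shared 10-Hz envelope

comodulated = envelope * sum(np.sin(2 * np.pi * f * t) for f in carriers)
comodulated /= np.abs(comodulated).max()   # normalise to +/-1
```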
Goyer, David; Fensky, Luisa; Hilverling, Anna Maria; Kurth, Stefanie; Kuenzel, Thomas
2015-05-01
In the avian nucleus magnocellularis (NM), endbulb of Held giant synapses develop from temporary bouton terminals. The molecular regulation of this process is not well understood. Furthermore, it is unknown how the postsynaptic specialization of the endbulb synapses develops. We therefore analysed expression of the postsynaptic scaffold protein PSD-95 during the transition from bouton to endbulb synapses. PSD-95 has been implicated in the regulation of the strength of glutamatergic synapses and could accordingly be of functional relevance for giant synapse formation. PSD-95 protein was expressed at synaptic sites in embryonic chicken auditory brainstem and upregulated between embryonic days (E)12 and E16. We applied immunofluorescence staining and confocal microscopy to quantify pre- and postsynaptic protein signals during the bouton-to-endbulb transition. Giant terminal formation progressed along the tonotopic axis in NM, but was absent in low-frequency NM. We found a tonotopic gradient of postsynaptic PSD-95 signals in NM. Furthermore, PSD-95 immunosignals showed the greatest increase between E12 and E15, temporally preceding the bouton-to-endbulb transition. We then applied whole-cell electrophysiology to measure synaptic currents elicited by synaptic terminals during the bouton-to-endbulb transition. With progressing endbulb formation, postsynaptic currents rose more rapidly and synapses were less susceptible to short-term depression, but currents were not different in amplitude or decay-time constant. We conclude that development of presynaptic specializations follows postsynaptic development and speculate that the early PSD-95 increase could play a functional role in endbulb formation. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.
2015-01-01
The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766
Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.
Tian, Biao; Rauschecker, Josef P
2004-11-01
Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
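A linear FM sweep of the kind used as a stimulus above is fully specified by its frequency endpoints and rate; direction ("up" vs. "down") just swaps the endpoints. The sketch below generates slow and fast sweeps; the parameter values are illustrative, not those of the study:

```python
# Sketch of a linear FM sweep (chirp) generator. Values illustrative.
import numpy as np

def fm_sweep(f0, f1, rate_hz_per_ms, fs=48000):
    duration = abs(f1 - f0) / (rate_hz_per_ms * 1000)   # seconds
    t = np.arange(0, duration, 1 / fs)
    k = (f1 - f0) / duration                            # sweep slope (Hz/s)
    # Instantaneous phase of a linear chirp: 2*pi*(f0*t + 0.5*k*t^2)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

slow_up = fm_sweep(500, 2000, rate_hz_per_ms=10)     # ~150 ms "up" sweep
fast_down = fm_sweep(2000, 500, rate_hz_per_ms=200)  # ~7.5 ms "down" sweep
```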
Temporal plasticity in auditory cortex improves neural discrimination of speech sounds
Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.
2017-01-01
Background Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520
Enhanced auditory spatial localization in blind echolocators.
Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A
2015-01-01
Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Visual and brainstem auditory evoked potentials in infants with severe vitamin B12 deficiency.
Demir, Nihat; Koç, Ahmet; Abuhandan, Mahmut; Calik, Mustafa; Işcan, Akin
2015-01-01
Vitamin B12 plays an important role in the development of mental, motor, cognitive, and social functions via its role in DNA synthesis and nerve myelination. Its deficiency in infants might cause neuromotor retardation as well as megaloblastic anemia. The objective of this study was to investigate the effects of infantile vitamin B12 deficiency on evoked brain potentials and determine whether improvement could be obtained with vitamin B12 replacement at appropriate dosages. Thirty patients with vitamin B12 deficiency and 30 age-matched healthy controls were included in the study. Hematological parameters, visual evoked potentials, and brainstem auditory evoked potentials tests were performed prior to treatment, 1 week after treatment, and 3 months after treatment. Visual evoked potentials (VEPs) and brainstem auditory evoked potentials (BAEPs) were found to be prolonged in 16 (53.3%) and 15 (50%) patients, respectively. Statistically significant improvements in VEP and BAEP examinations were determined 3 months after treatment. Three months after treatment, VEP and BAEP examinations returned to normal in 81.3% and 53.3% of subjects with prolonged VEPs and BAEPs, respectively. These results demonstrate that vitamin B12 deficiency in infants causes significant impairment in the auditory and visual functioning tests of the brain, such as VEP and BAEP.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2017-01-01
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
Neural Correlates of Sound Localization in Complex Acoustic Environments
Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto
2013-01-01
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment in healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, seems also to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
Speech and language disorders in children from public schools in Belo Horizonte
Rabelo, Alessandra Terra Vasconcelos; Campos, Fernanda Rodrigues; Friche, Clarice Passos; da Silva, Bárbara Suelen Vasconcelos; Friche, Amélia Augusta de Lima; Alves, Claudia Regina Lindgren; Goulart, Lúcia Maria Horta de Figueiredo
2015-01-01
Objective: To investigate the prevalence of oral language, orofacial motor skill and auditory processing disorders in children aged 4-10 years and verify their association with age and gender. Methods: Cross-sectional study with stratified, random sample consisting of 539 students. The evaluation consisted of three protocols: orofacial motor skill protocol, adapted from the Myofunctional Evaluation Guidelines; the Child Language Test ABFW - Phonology; and a simplified auditory processing evaluation. Descriptive and associative statistical analyses were performed using Epi Info software, release 6.04. Chi-square test was applied to compare proportion of events and analysis of variance was used to compare mean values. Significance was set at p≤0.05. Results: Of the studied subjects, 50.1% had at least one of the assessed disorders; of those, 33.6% had oral language disorder, 17.1% had orofacial motor skill impairment, and 27.3% had auditory processing disorder. There were significant associations between auditory processing skills’ impairment, oral language impairment and age, suggesting a decrease in the number of disorders with increasing age. Similarly, the variable "one or more speech, language and hearing disorders" was also associated with age. Conclusions: The prevalence of speech, language and hearing disorders in children was high, indicating the need for research and public health efforts to cope with this problem. PMID:26300524
Absence of modulatory action on haptic height perception with musical pitch
Geronazzo, Michele; Avanzini, Federico; Grassi, Massimo
2015-01-01
Although acoustic frequency is not a spatial property of physical objects, in common language, pitch, i.e., the psychological correlate of frequency, is often labeled spatially (i.e., “high in pitch” or “low in pitch”). Pitch-height is known to modulate (and interact with) the response of participants when they are asked to judge spatial properties of non-auditory stimuli (e.g., visual) in a variety of behavioral tasks. In the current study we investigated whether the modulatory action of pitch-height extended to the haptic estimation of height of a virtual step. We implemented a HW/SW setup which is able to render virtual 3D objects (stair-steps) haptically through a PHANTOM device, and to provide real-time continuous auditory feedback depending on the user interaction with the object. The haptic exploration was associated with a sinusoidal tone whose pitch varied as a function of the interaction point's height within (i) a narrower and (ii) a wider pitch range, or (iii) a random pitch variation acting as a control audio condition. Explorations were also performed with no sound (haptic only). Participants were instructed to explore the virtual step freely, and to communicate height estimation by opening their thumb and index finger to mimic the step riser height, or verbally by reporting the height in centimeters of the step riser. We analyzed the role of musical expertise by dividing participants into non-musicians and musicians. Results showed no effect of musical pitch on highly realistic haptic feedback. Overall, there was no difference between the two groups in the proposed multimodal conditions. Additionally, we observed a different haptic response distribution between musicians and non-musicians when estimations in the auditory conditions are matched with estimations in the no-sound condition. PMID:26441745
Theta Phase Synchronization Is the Glue that Binds Human Associative Memory.
Clouter, Andrew; Shapiro, Kimron L; Hanslmayr, Simon
2017-10-23
Episodic memories are information-rich, often multisensory events that rely on binding different elements [1]. The elements that will constitute a memory episode are processed in specialized but distinct brain modules. The binding of these elements is most likely mediated by fast-acting long-term potentiation (LTP), which relies on the precise timing of neural activity [2]. Theta oscillations in the hippocampus orchestrate such timing as demonstrated by animal studies in vitro [3, 4] and in vivo [5, 6], suggesting a causal role of theta activity for the formation of complex memory episodes, but direct evidence from humans is missing. Here, we show that human episodic memory formation depends on phase synchrony between different sensory cortices at the theta frequency. By modulating the luminance of visual stimuli and the amplitude of auditory stimuli, we directly manipulated the degree of phase synchrony between visual and auditory cortices. Memory for sound-movie associations was significantly better when the stimuli were presented in phase compared to out of phase. This effect was specific to theta (4 Hz) and did not occur in slower (1.7 Hz) or faster (10.5 Hz) frequencies. These findings provide the first direct evidence that episodic memory formation in humans relies on a theta-specific synchronization mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
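The phase-synchrony manipulation described above can be sketched by generating two 4-Hz envelopes (e.g., luminance and loudness) at a fixed phase offset and quantifying their synchrony with a phase-locking value via the Hilbert transform. The sampling rate and duration below are illustrative assumptions, not the study's parameters:

```python
# Sketch of theta (4 Hz) phase synchrony between two stimulus
# envelopes. Parameter values are illustrative.
import numpy as np
from scipy.signal import hilbert

fs, dur, f_theta = 1000, 3.0, 4.0
t = np.arange(0, dur, 1 / fs)
visual_env = np.sin(2 * np.pi * f_theta * t)          # luminance envelope
audio_env = np.sin(2 * np.pi * f_theta * t + np.pi)   # 180 degrees out of phase

phase_diff = np.angle(hilbert(visual_env)) - np.angle(hilbert(audio_env))
plv = np.abs(np.mean(np.exp(1j * phase_diff)))        # 1 = consistent relation
offset = np.angle(np.mean(np.exp(1j * phase_diff)))   # ~pi here (out of phase)
```

A PLV near 1 indicates a consistent phase relation; whether the stimuli were in or out of phase is read from the mean phase offset.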
Skill dependent audiovisual integration in the fusiform induces repetition suppression.
McNorgan, Chris; Booth, James R
2015-02-01
Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Corona-Strauss, Farah I; Delb, Wolfgang; Schick, Bernhard; Strauss, Daniel J
2010-01-01
Auditory Brainstem Responses (ABRs) are used as an objective method for the diagnosis and quantification of hearing loss. Many methods for automatic recognition of ABRs have been developed, but none of them includes the individual measurement setup in the analysis. The purpose of this work was to design a fast recognition scheme for chirp-evoked ABRs that is adjusted to the individual measurement condition using spontaneous electroencephalographic activity (SA). For the classification, the kernel-based novelty detection scheme used features based on inter-sweep instantaneous phase synchronization as well as energy and entropy relations in the time-frequency domain. This method discriminated SA from stimulation above the hearing threshold with a minimum number of sweeps, i.e., 200 individual responses. It is concluded that the proposed paradigm, processing procedures and stimulation techniques improve the detection of ABRs in terms of the degree of objectivity, i.e., automation of the procedure, and measurement time.
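One plausible reading of the inter-sweep instantaneous phase-synchronization feature is a phase-locking measure computed across sweeps: stimulus-locked responses yield values near 1, spontaneous activity values near 0. The sketch below uses a Hilbert-based phase and assumed array shapes; the authors worked with time-frequency representations, so this is only an approximation:

```python
# Sketch of inter-sweep phase synchronization. Shapes are illustrative
# (200 sweeps of 512 samples each), and the Hilbert phase stands in
# for the time-frequency phase used in the study.
import numpy as np
from scipy.signal import hilbert

def inter_sweep_sync(sweeps: np.ndarray) -> np.ndarray:
    """sweeps: (n_sweeps, n_samples) -> synchrony per time point."""
    phases = np.angle(hilbert(sweeps, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

sweeps = np.random.randn(200, 512)   # stand-in for 200 recorded sweeps
sync = inter_sweep_sync(sweeps)      # near 0 everywhere for pure noise
```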
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
2016-01-01
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193
Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio
2015-01-01
Objective Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The aim of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual’s capacity to drive safely. Methods The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Results Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the aim of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication, and to promote a sense of social responsibility in drivers who are on medication by providing these individuals with a means of testing their own capacity to drive safely. PMID:25709406
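The decile-based screening rule can be sketched as follows. The normative values, the direction of scoring (here, slower reaction times fall in the flagged deciles), and the cut-off are all assumptions for illustration, not the Safedrive app's actual parameters:

```python
# Sketch of decile-based pass/fail screening on reaction times (RTs).
# Normative data and the cut-off are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
norm_rts = rng.normal(450, 80, size=300)              # normative sample (ms)
deciles = np.percentile(norm_rts, np.arange(10, 100, 10))

def safe_to_drive(rt_ms: float) -> bool:
    """Assuming slower RTs are worse, the 3 worst deciles sit at or
    above the 70th percentile of the normative distribution."""
    return rt_ms < deciles[6]    # deciles[6] is the 70th percentile

print(safe_to_drive(420.0), safe_to_drive(620.0))     # True False
```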
Zhang, Y; Li, D D; Chen, X W
2017-06-20
Objective: A case-control study comparing speech discrimination in patients with unilateral microtia and external auditory canal atresia to that of normal-hearing subjects in quiet and noisy environments, in order to characterise the speech recognition of patients with unilateral external auditory canal atresia and provide a scientific basis for early clinical intervention. Method: Twenty patients with unilateral congenital microtia malformation and external auditory canal atresia, and 20 age-matched normal-hearing subjects as a control group, were tested with Mandarin speech audiometry material to obtain speech discrimination scores (SDS) in quiet and in noise in the sound field. Result: There was no significant difference in speech discrimination scores between the two groups under quiet conditions. Scores differed significantly when the speech signal was presented to the affected side and the noise to the normal side (monosyllables, disyllables and sentences; S/N=0 and S/N=-10) (P<0.05). There was no significant difference when the speech signal was presented to the normal side and the noise to the affected side. With signal and noise on the same side, there was a statistically significant difference for monosyllabic word recognition (S/N=0 and S/N=-5) (P<0.05), whereas disyllabic words and sentences showed no statistically significant difference (P>0.05). Conclusion: Under noisy conditions, the speech discrimination scores of patients with unilateral congenital microtia malformation and external auditory canal atresia are lower than those of normal-hearing subjects. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Kamel, Terez Boshra; Abd Elmonaem, Mahmoud Tarek; Khalil, Lobna Hamed; Goda, Mona Hamdy; Sanyelbhaa, Hossam; Ramzy, Mourad Alfy
2016-10-01
Chronic lung disease (CLD) in children represents a heterogeneous group of clinico-pathological entities that carry a risk of adverse effects from chronic or intermittent hypoxia. So far, few researchers have investigated cognitive function in these children, and the role of the auditory P300 in its assessment has not yet been investigated. This study was designed to assess cognitive function among schoolchildren with different chronic pulmonary diseases using both the auditory P300 and the Stanford-Binet test. This cross-sectional study included 40 school-aged children who were suffering from chronic chest troubles other than asthma and 30 healthy children of similar age, gender and socioeconomic state as a control group. All subjects were evaluated through clinical examination, radiological evaluation and spirometry. Audiological evaluation included basic otological examination, pure-tone and speech audiometry, and immittancemetry. Cognitive function was assessed by auditory P300 and psychological evaluation using the Stanford-Binet test (4th edition). Children with chronic lung diseases had significantly lower anthropometric measures compared to healthy controls. They had statistically significantly lower IQ scores and delayed P300 latencies, denoting lower cognitive abilities. Cognitive dysfunction correlated with disease severity. P300 latencies were prolonged among hypoxic patients. Cognitive deficits in children with different chronic lung diseases were best detected using both the Stanford-Binet test and the auditory P300. The P300 is an easy objective tool; it is affected early by hypoxia and could flag subtle cognitive dysfunction.
Clinical significance and developmental changes of auditory-language-related gamma activity
Kojima, Katsuaki; Brown, Erik C.; Rothermel, Robert; Carlson, Alanna; Fuerst, Darren; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Basha, Maysaa; Mittal, Sandeep; Sood, Sandeep; Asano, Eishi
2012-01-01
OBJECTIVE We determined the clinical impact and developmental changes of auditory-language-related augmentation of gamma activity at 50–120 Hz recorded on electrocorticography (ECoG). METHODS We analyzed data from 77 epileptic patients ranging from 4 to 56 years of age. We determined the effects of seizure-onset zone, electrode location, and patient age upon gamma-augmentation elicited by an auditory-naming task. RESULTS Gamma-augmentation was less frequently elicited within seizure-onset sites compared to other sites. Regardless of age, gamma-augmentation most often involved the 80–100 Hz frequency band. Gamma-augmentation initially involved bilateral superior-temporal regions, followed by left-side dominant involvement in the middle-temporal, medial-temporal, inferior-frontal, dorsolateral-premotor, and medial-frontal regions and concluded with bilateral inferior-Rolandic involvement. Compared to younger patients, those older than 10 years had a larger proportion of left dorsolateral-premotor and right inferior-frontal sites showing gamma-augmentation. The incidence of a post-operative language deficit requiring speech therapy was predicted by the number of resected sites with gamma-augmentation in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions of the left hemisphere assumed to contain essential language function (r2=0.59; p=0.001; odds ratio=6.04 [95% confidence-interval: 2.26 to 16.15]). CONCLUSIONS Auditory-language-related gamma-augmentation can provide additional information useful to localize the primary language areas. SIGNIFICANCE These results derived from a large sample of patients support the utility of auditory-language-related gamma-augmentation in presurgical evaluation. PMID:23141882
Tarabichi, Osama; Kozin, Elliott D; Kanumuri, Vivek V; Barber, Samuel; Ghosh, Satra; Sitek, Kevin R; Reinshagen, Katherine; Herrmann, Barbara; Remenschneider, Aaron K; Lee, Daniel J
2018-03-01
Objective The radiologic evaluation of patients with hearing loss includes computed tomography and magnetic resonance imaging (MRI) to highlight temporal bone and cochlear nerve anatomy. The central auditory pathways are often not studied for routine clinical evaluation. Diffusion tensor imaging (DTI) is an emerging MRI-based modality that can reveal microstructural changes in white matter. In this systematic review, we summarize the value of DTI in the detection of structural changes of the central auditory pathways in patients with sensorineural hearing loss. Data Sources PubMed, Embase, and Cochrane. Review Methods We used the Preferred Reporting Items for Systematic Reviews and Meta-Analysis statement checklist for study design. All studies that included at least 1 sensorineural hearing loss patient with DTI outcome data were included. Results After inclusion and exclusion criteria were met, 20 articles were analyzed. Patients with bilateral hearing loss comprised 60.8% of all subjects. Patients with unilateral or progressive hearing loss and tinnitus made up the remaining studies. The auditory cortex and inferior colliculus (IC) were the most commonly studied regions using DTI, and most cases were found to have changes in diffusion metrics, such as fractional anisotropy, compared to normal hearing controls. Detectable changes in other auditory regions were reported, but there was a higher degree of variability. Conclusion White matter changes based on DTI metrics can be seen in patients with sensorineural hearing loss, but studies are few in number with modest sample sizes. Further standardization of DTI using a prospective study design with larger sample sizes is needed.
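Fractional anisotropy, the DTI metric most often reported in the studies reviewed above, has a standard definition over the eigenvalues of the diffusion tensor. A minimal sketch, with illustrative eigenvalues:

```python
# Standard fractional anisotropy (FA) formula over diffusion tensor
# eigenvalues; the eigenvalues below are illustrative.
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(0.5 * num / den)

# Strongly directional diffusion (white-matter-like) vs. isotropic:
print(fractional_anisotropy(1.7e-3, 0.3e-3, 0.2e-3))  # high FA (~0.84)
print(fractional_anisotropy(0.8e-3, 0.8e-3, 0.8e-3))  # FA = 0
```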
Input from the medial geniculate nucleus modulates amygdala encoding of fear memory discrimination.
Ferrara, Nicole C; Cullen, Patrick K; Pullins, Shane P; Rotondo, Elena K; Helmstetter, Fred J
2017-09-01
Generalization of fear can involve abnormal responding to cues that signal safety and is common in people diagnosed with post-traumatic stress disorder. Differential auditory fear conditioning can be used as a tool to measure changes in fear discrimination and generalization. Most prior work in this area has focused on elevated amygdala activity as a critical component underlying generalization. The amygdala receives input from auditory cortex as well as the medial geniculate nucleus (MgN) of the thalamus, and these synapses undergo plastic changes in response to fear conditioning and are major contributors to the formation of memory related to both safe and threatening cues. The requirement for MgN protein synthesis during auditory discrimination and generalization, as well as the role of MgN plasticity in amygdala encoding of discrimination or generalization, have not been directly tested. GluR1 and GluR2 containing AMPA receptors are found at synapses throughout the amygdala and their expression is persistently up-regulated after learning. Some of these receptors are postsynaptic to terminals from MgN neurons. We found that protein synthesis-dependent plasticity in MgN is necessary for elevated freezing to both aversive and safe auditory cues, and that this is accompanied by changes in the expressions of AMPA receptor and synaptic scaffolding proteins (e.g., SHANK) at amygdala synapses. This work contributes to understanding the neural mechanisms underlying increased fear to safety signals after stress. © 2017 Ferrara et al.; Published by Cold Spring Harbor Laboratory Press.
Auditory cortical responses in patients with cochlear implants
Burdo, S; Razza, S; Di Berardino, F; Tognola, G
2006-01-01
Summary Currently, the most commonly used electrophysiological tests for cochlear implant evaluation are Averaged Electrical Voltages (AEV), Electrically Evoked Auditory Brainstem Responses (EABR) and Neural Response Telemetry (NRT). The present paper focuses on the study of acoustic auditory cortical responses, or slow vertex responses, which are not widely used due to the difficulty in recording, especially in young children. The aims of this study were to validate slow vertex responses and to explore their possible applications in monitoring postimplant results, particularly restoration of hearing and auditory maturation. In practice, the use of tone-bursts, also through hearing aids or cochlear implants, as in slow vertex responses, allows many more frequencies to be investigated and louder intensities to be reached than with other tests based on a click as stimulus. The study design focused on latencies of the N1 and P2 slow vertex response peaks in cochlear implant recipients. The study population comprised 45 implant recipients (aged 2 to 70 years), divided into 5 different homogeneous groups according to chronological age, age at onset of deafness, and age at implantation. For each subject, slow vertex responses and free-field auditory responses (PTAS) were recorded for tone-bursts at 500 and 2000 Hz before cochlear implant surgery (using hearing aid amplification) and during scheduled sessions at the 3rd and 12th months after implant activation. Results showed that N1 and P2 latencies decreased in all groups from the 3rd through the 12th month after activation. Subjects implanted before school age, or at least before age 8 yrs, showed the widest latency changes. All subjects showed a reduction in the gap between subjective thresholds (obtained with free-field auditory responses) and objective thresholds (obtained with slow vertex responses), measured at the presurgery stage and after cochlear implantation. In conclusion, a natural evolution of neurophysiological cortical activities of the auditory pathway, over time, was found especially in young children with prelingual deafness who were implanted at preschool age. Cochlear implantation appears to provide hearing restoration, demonstrated by the sharp reduction of the gap between subjective free-field auditory response and slow vertex response thresholds obtained with hearing aids vs. cochlear implant. PMID:16886849
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis
2017-02-01
It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures for assessing the development of auditory, speech and language skills in children using hearing aids and/or cochlear implants relative to their peers with normal hearing. The aim of this study was therefore to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample comprised 92 Persian-speaking children aged 5-7 years: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All these children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based conditions: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. Sentence repetition scores differed significantly between the V-only, A-only, and AV presentations in all three groups; the highest to lowest scores belonged, respectively, to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were strongly correlated with auditory-only scores (r = 0.943, n = 92, P < 0.001). According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, as it does in their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition be used as a clinical criterion for auditory development in Persian-language children with hearing loss. Copyright © 2016. Published by Elsevier B.V.
Spatial Attention Modulates the Precedence Effect
ERIC Educational Resources Information Center
London, Sam; Bishop, Christopher W.; Miller, Lee M.
2012-01-01
Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such…
Prescriptive Teaching from the DTLA.
ERIC Educational Resources Information Center
Banas, Norma; Wills, I. H.
1979-01-01
The article (Part 2 of a series) discusses the Auditory Attention Span for Unrelated Words and the Visual Attention Span for Objects subtests of the Detroit Tests of Learning Aptitude. Skills measured and related factors influencing performance are among aspects considered. Suggestions for remediating deficits and capitalizing on strengths are…
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
2001-06-01
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
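The single-band envelope supplement described above can be sketched as band-pass filtering speech around 500 Hz, extracting the amplitude envelope, and using it to modulate a 200-Hz carrier. The filter order and exact band edges below are assumptions, not the study's processing chain:

```python
# Sketch of a single-band envelope supplement: octave band centred at
# 500 Hz (~354-707 Hz) modulating a 200-Hz carrier. Filter design
# choices are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_supplement(speech: np.ndarray, fs: int) -> np.ndarray:
    sos = butter(4, [354, 707], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, speech)              # octave band around 500 Hz
    env = np.abs(hilbert(band))                  # amplitude envelope
    t = np.arange(len(speech)) / fs
    return env * np.sin(2 * np.pi * 200 * t)     # envelope on 200-Hz carrier

fs = 16000
speech = np.random.randn(fs)   # stand-in for a 1-s speech signal
supplement = envelope_supplement(speech, fs)
```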
Berger, Joel I; Owen, William; Wilson, Caroline A; Hockley, Adam; Coomber, Ben; Palmer, Alan R; Wallace, Mark N
2018-01-15
Animal models of tinnitus are essential for determining the underlying mechanisms and testing pharmacotherapies. However, there is doubt over the validity of current behavioural methods for detecting tinnitus. Here, we applied a stimulus paradigm widely used in a behavioural test (gap-induced inhibition of the acoustic startle reflex; GPIAS) whilst recording from the auditory cortex, and showed neural response changes that mirror those found in the behavioural tests. We implanted guinea pigs (GPs) with electrocorticographic (ECoG) arrays and recorded baseline auditory cortical responses to a startling stimulus. When a gap was inserted in otherwise continuous background noise prior to the startling stimulus, there was a clear reduction in the subsequent evoked response (termed gap-induced reductions in evoked potentials; GIREP), suggestive of a neural analogue of the GPIAS test. We then unilaterally exposed guinea pigs to narrowband noise (left ear; 8-10 kHz; 1 h) at one of two different sound levels - either 105 dB SPL or 120 dB SPL - and recorded the same responses seven to ten weeks following the noise exposure. Significant deficits in GIREP were observed for all areas of the auditory cortex (AC) in the 120 dB-exposed GPs, but not in the 105 dB-exposed GPs. These deficits could not simply be accounted for by changes in response amplitudes. Furthermore, in the contralateral (right) caudal AC we observed a significant increase in evoked potential amplitudes across narrowband background frequencies in both 105 dB and 120 dB-exposed GPs. Taken in the context of the large body of literature that has used the behavioural test as a demonstration of the presence of tinnitus, these results are suggestive of objective neural correlates of the presence of noise-induced tinnitus and hyperacusis. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
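A GIREP-style measure can be sketched as a comparison of evoked-response amplitudes on gap versus no-gap trials; expressing it as a ratio, and the peak-to-peak amplitude definition, are assumptions for illustration rather than the authors' exact analysis:

```python
# Sketch of a gap-induced reduction in evoked potentials (GIREP)
# measure. Trial arrays are invented stand-ins.
import numpy as np

def girep_ratio(gap_trials: np.ndarray, nogap_trials: np.ndarray) -> float:
    """Trials are (n_trials, n_samples) evoked-potential epochs.
    Returns mean gap-trial amplitude / mean no-gap amplitude;
    values near 1 indicate little gap-induced inhibition."""
    p2p = lambda x: x.max(axis=1) - x.min(axis=1)
    return p2p(gap_trials).mean() / p2p(nogap_trials).mean()

rng = np.random.default_rng(1)
gap = 0.6 * rng.standard_normal((50, 300))    # stand-in: inhibited responses
nogap = 1.0 * rng.standard_normal((50, 300))  # stand-in: full responses
print(round(girep_ratio(gap, nogap), 2))      # ~0.6
```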
Auditory object perception: A neurobiological model and prospective review.
Brefczynski-Lewis, Julie A; Lewis, James W
2017-10-01
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be at least in part organized around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein. Copyright © 2017. Published by Elsevier Ltd.
A device for human ultrasonic echolocation
Gaub, Benjamin M.; Rodgers, Chris C.; Li, Crystal; DeWeese, Michael R.; Harper, Nicol S.
2015-01-01
Objective We present a device that combines principles of ultrasonic echolocation and spatial hearing to provide human users with environmental cues that are 1) not otherwise available to the human auditory system and 2) richer in object and spatial information than the more heavily processed sonar cues of other assistive devices. The device consists of a wearable headset with an ultrasonic emitter and stereo microphones with affixed artificial pinnae. The goal of this study is to describe the device and evaluate the utility of the echoic information it provides. Methods The echoes of ultrasonic pulses were recorded and time-stretched to lower their frequencies into the human auditory range, then played back to the user. We tested performance among naive and experienced sighted volunteers using a set of localization experiments in which the locations of echo-reflective surfaces were judged using these time-stretched echoes. Results Naive subjects were able to make laterality and distance judgments, suggesting that the echoes provide innately useful information without prior training. Naive subjects were generally unable to make elevation judgments from recorded echoes. However, trained subjects demonstrated an ability to judge elevation as well. Conclusion This suggests that the device can be used effectively to examine the environment and that the human auditory system can rapidly adapt to these artificial echolocation cues. Significance Interpreting and interacting with the external world constitutes a major challenge for persons who are blind or visually impaired. This device has the potential to aid blind people in interacting with their environment. PMID:25608301
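The time-stretching step that lowers ultrasonic echoes into the audible range amounts to replaying the recorded samples at a fraction of the original rate, which divides every frequency component by the stretch factor (a 40-kHz component becomes 2 kHz at a factor of 20). The factor and sampling rates below are illustrative, not the device's published parameters:

```python
# Sketch of frequency lowering by playback-rate reduction.
# All parameter values are illustrative.
import numpy as np

def time_stretch(echo: np.ndarray, fs_in: int, factor: int = 20):
    """Keep the samples, lower the playback rate: every frequency is
    divided by `factor`."""
    return echo, fs_in // factor

fs_ultrasonic = 192_000                       # assumed recording rate
echo = np.random.randn(fs_ultrasonic // 100)  # stand-in 10-ms echo snippet
stretched, fs_playback = time_stretch(echo, fs_ultrasonic)
print(fs_playback)  # 9600-Hz playback: 20x slower, 20x lower in pitch
```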
Auditory abilities of speakers who persisted, or recovered, from stuttering
Howell, Peter; Davis, Stephen; Williams, Sheila M.
2006-01-01
Objective The purpose of this study was to see whether participants who persist in their stutter have poorer sensitivity in a backward masking task than those who recover from their stutter. Design The auditory sensitivity of 30 children who stutter was tested on absolute threshold, simultaneous masking, and backward masking with a broadband and with a notched noise masker. The participants had been seen and diagnosed as stuttering at least 1 year before their 12th birthday. The participants were assessed again at age 12 plus to establish whether their stutter had persisted or recovered. Persistence or recovery was based on the participants', parents' and researchers' assessments and Riley's [Riley, G. D. (1994). Stuttering severity instrument for children and adults (3rd ed.). Austin, TX: Pro-Ed.] Stuttering Severity Instrument-3. Based on this assessment, 12 speakers had persisted and 18 had recovered from stuttering. Results Thresholds differed significantly between the persistent and recovered groups for the broadband backward-masked stimulus (thresholds being higher for the persistent group). Conclusions Backward masking performance in the teenage years is one factor that distinguishes speakers who persist in their stutter from those who recover. Education objectives: Readers of this article should: (1) explain why auditory factors have been implicated in stuttering; (2) summarise the work that has examined whether peripheral and/or central hearing problems occur in stuttering; (3) explain how the hearing ability of persistent and recovered stutterers may differ; (4) discuss how hearing disorders have been implicated in other language disorders. PMID:16920188
Alais, David; Cass, John
2010-06-23
An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and beyond their group's modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing on the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
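The temporal-order discrimination threshold named above is typically estimated from a psychometric fit. A sketch in Python with invented response proportions and a cumulative-Gaussian model (the study's exact fitting procedure is not described in the abstract); the fitted scale parameter stands in for the threshold.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Invented TOJ data: stimulus onset asynchrony (ms) and the proportion
    # of "stimulus B first" responses at each SOA.
    soa = np.array([-120.0, -60.0, -30.0, 0.0, 30.0, 60.0, 120.0])
    p_b_first = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

    def psychometric(x, pss, sigma):
        # pss: point of subjective simultaneity; sigma: discrimination threshold
        return norm.cdf(x, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soa, p_b_first, p0=(0.0, 50.0))
    print(f"PSS = {pss:.1f} ms, threshold = {sigma:.1f} ms")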
Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.
Shiell, Martha M; Hausfeld, Lars; Formisano, Elia
2018-05-23
The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from-that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors 0270-6474/18/384977-08$15.00/0.
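A schematic of the decoding analysis described above, using scikit-learn's linear support vector machine on simulated voxel patterns (trials x voxels). The ROI extraction, cross-validation scheme, and data here are stand-ins, not the study's pipeline.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Simulated patterns: 40 trials x 200 voxels, two separation conditions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))
    y = np.repeat([0, 1], 20)          # e.g., small vs. large spatial separation
    X[y == 1, :10] += 0.8              # inject a weak condition-specific signal

    clf = make_pipeline(StandardScaler(), LinearSVC())
    print("mean decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean())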
Zhang, Yingli; Liang, Wei; Yang, Shichang; Dai, Ping; Shen, Lijuan; Wang, Changhong
2013-01-01
Objective: This study assessed the efficacy and tolerability of repetitive transcranial magnetic stimulation for the treatment of auditory hallucinations in patients with schizophrenia spectrum disorders. Data Sources: Online literature retrieval was conducted using the PubMed, ISI Web of Science, EMBASE, Medline and Cochrane Central Register of Controlled Trials databases from January 1985 to May 2012. Key words were “transcranial magnetic stimulation”, “TMS”, “repetitive transcranial magnetic stimulation”, and “hallucination”. Study Selection: Selected studies were randomized controlled trials assessing the therapeutic efficacy of repetitive transcranial magnetic stimulation for hallucination in patients with schizophrenia spectrum disorders. The experimental intervention was low-frequency repetitive transcranial magnetic stimulation over the left temporoparietal cortex for the treatment of auditory hallucination in schizophrenia spectrum disorders. Control groups received sham stimulation. Main Outcome Measures: The primary outcomes were total scores on the Auditory Hallucinations Rating Scale, the Auditory Hallucination Subscale of the Psychotic Symptom Rating Scale, the Positive and Negative Symptom Scale-Auditory Hallucination item, and the Hallucination Change Scale. Secondary outcomes included response rate, global mental state, adverse effects and cognitive function. Results: Seventeen randomized controlled trials of repetitive transcranial magnetic stimulation for schizophrenia spectrum disorders, with controls receiving sham stimulation, were included; complete data were available for 398 patients. The overall mean weighted effect size for repetitive transcranial magnetic stimulation versus sham stimulation was statistically significant (MD = –0.42, 95%CI: –0.64 to –0.20, P = 0.0002). Patients receiving repetitive transcranial magnetic stimulation responded more frequently than those receiving sham stimulation (OR = 2.94, 95%CI: 1.39 to 6.24, P = 0.005). No significant differences were found between active repetitive transcranial magnetic stimulation and sham stimulation for positive or negative symptoms. Compared with sham stimulation, active repetitive transcranial magnetic stimulation had equivocal effects on cognitive function and commonly caused headache and facial muscle twitching. Conclusion: Repetitive transcranial magnetic stimulation is a safe and effective treatment for auditory hallucination in schizophrenia spectrum disorders. PMID:25206578
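The pooled mean difference reported above is the kind of quantity an inverse-variance (fixed-effect) meta-analysis produces. A minimal Python sketch with invented per-study values, not the review's data:

    import numpy as np

    # Hypothetical per-study mean differences and standard errors.
    md = np.array([-0.50, -0.30, -0.60])
    se = np.array([0.20, 0.15, 0.25])

    w = 1.0 / se**2                        # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)    # pooled mean difference
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"MD = {pooled:.2f}, 95% CI: {lo:.2f} to {hi:.2f}")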
Melo, Ândrea de; Mezzomo, Carolina Lisbôa; Garcia, Michele Vargas; Biaggio, Eliara Pinto Vieira
2018-01-01
Introduction Computerized auditory training (CAT) has been building a good reputation in the stimulation of auditory abilities in cases of auditory processing disorder (APD). Objective To measure the effects of CAT in students with APD, with typical or atypical phonological acquisition, through electrophysiological and subjective measures, correlating them pre- and post-therapy. Methods The sample for this study included 14 children with APD, subdivided into children with APD and typical phonological acquisition (G1), and children with APD and atypical phonological acquisition (G2). Phonological evaluation of children (PEC), long latency auditory evoked potential (LLAEP) and the scale of auditory behaviors (SAB) were conducted to help with the composition of the groups and with the therapeutic intervention. The therapeutic intervention was performed using the software Escuta Ativa (CTS Informática, Pato Branco, Brazil) in 12 sessions of 30 minutes, twice a week. For data analysis, the appropriate statistical tests were used. Results A decrease in the latency of the negative wave N2 and the positive wave P3 in the left ear in G1, and a decrease of P2 in the right ear in G2, were observed. In the analysis comparing the groups pre- and post-CAT, there was a significant difference in P1 latency in the left ear and P2 latency in the right ear, pre-intervention. Furthermore, eight children had an absence of the P3 wave pre-CAT, but after the intervention, all of them presented the P3 wave. There were changes in the SAB score pre- and post-CAT in both groups. A correlation between the scale and some LLAEP components was observed. Conclusion The CAT produced an electrophysiological modification, evident in the effects of neural plasticity after CAT. The SAB proved to be useful in measuring the therapeutic effects of the intervention. Moreover, there were behavioral changes in the SAB (higher scores) and correlation with the LLAEP.
de Castro, Bianca C R; Guida, Heraldo L; Roque, Adriano L; de Abreu, Luiz Carlos; Ferreira, Celso; Marcomini, Renata S; Monteiro, Carlos B M; Adami, Fernando; Ribeiro, Viviane F; Fonseca, Fernando L A; Santos, Vilma N S; Valenti, Vitor E
2014-01-01
The behavior of the geometric indices of heart rate variability (HRV) during musical auditory stimulation is poorly described in the literature. The objective is to investigate the acute effects of classical musical auditory stimulation on the geometric indices of HRV in women in response to the postural change maneuver (PCM). We evaluated 11 healthy women between 18 and 25 years old. We analyzed the following indices: triangular index, triangular interpolation of RR intervals, and the Poincaré plot (standard deviation of the instantaneous beat-to-beat variability [SD1], standard deviation of the long-term continuous RR interval variability [SD2], and the ratio between the short- and long-term variations of RR intervals [SD1/SD2]). HRV was recorded at seated rest for 10 min. The women quickly stood up from a seated position in up to 3 s and remained standing still for 15 min. HRV was recorded at the following periods: rest, 0-5 min, 5-10 min and 10-15 min during standing. In the second protocol, the subject was exposed to musical auditory stimulation (Pachelbel's Canon in D) for 10 min in the seated position before standing. The Shapiro-Wilk test was used to verify the normality of the data; ANOVA for repeated measures followed by the Bonferroni test was used for parametric variables, and Friedman's test followed by Dunn's post-test for non-parametric distributions. In the first protocol, all indices were reduced at 10-15 min after the volunteers stood up. In the musical auditory stimulation protocol, the SD1 index was reduced at 5-10 min after the volunteers stood up compared with the music period. The SD1/SD2 ratio was decreased in the control and music periods compared with 5-10 min after the volunteers stood up. Musical auditory stimulation attenuates the cardiac autonomic responses to the PCM.
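The Poincaré indices named above have simple closed forms. A Python sketch computing SD1, SD2, and the SD1/SD2 ratio from a synthetic RR-interval series (the study's recordings are not reproduced here):

    import numpy as np

    # Synthetic RR intervals in milliseconds, for illustration only.
    rr = np.array([790.0, 795.0, 802.0, 810.0, 818.0, 822.0, 815.0, 808.0])

    d = np.diff(rr)                          # successive differences
    sd1 = np.sqrt(np.var(d, ddof=1) / 2.0)   # short-term beat-to-beat variability
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - sd1**2)  # long-term variability
    print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1 / sd2:.2f}")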
Development of a Pitch Discrimination Screening Test for Preschool Children.
Abramson, Maria Kulick; Lloyd, Peter J
2016-04-01
There is a critical need for tests of auditory discrimination for young children, as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) to screen the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief speech-frequency tones to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional study was used to gather data regarding the pitch discrimination abilities of a sample of typically developing preschool children between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, and was administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative data. Nonparametric Mann-Whitney U-testing was used to examine the effects of age as a continuous variable on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age on performance on the PDT. The Spearman rank correlation was used to determine the relationship between age and performance on the PDT. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the combined 4- and 5-yr age group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning at age 4 yrs. The PDT proved to be a time-efficient, feasible tool for a simple form of frequency discrimination screening in the preschool population before the age at which other diagnostic tests of auditory processing disorders can be used. American Academy of Audiology.
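The statistical battery listed above is available directly in scipy. A sketch with invented PDT scores for two age groups (the actual normative data are not reproduced in the abstract):

    import numpy as np
    from scipy import stats

    # Invented PDT scores (correct trials out of 10) for two age groups.
    scores_3yr = np.array([4, 5, 5, 6, 4, 5, 6, 5])
    scores_4yr = np.array([6, 7, 8, 7, 6, 8, 7, 9])

    u = stats.mannwhitneyu(scores_3yr, scores_4yr)   # group comparison
    h = stats.kruskal(scores_3yr, scores_4yr)        # effect of age group
    ages = np.concatenate([np.full(8, 3), np.full(8, 4)])
    rho = stats.spearmanr(ages, np.concatenate([scores_3yr, scores_4yr]))
    print(u.pvalue, h.pvalue, rho.correlation)       # age-performance link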
Research on Multimedia Access to Microcomputers for Visually Impaired Youth.
ERIC Educational Resources Information Center
Ashcroft, S. C.
This final report discusses the outcomes of a federally funded project that studied visual, auditory, and tactual methods designed to give youth with visual impairments access to microcomputers for curricular, prevocational, and avocational purposes. The objectives of the project were: (1) to research microcomputer systems that could be made…
Perspectives of Elementary School Teachers on Outdoor Education
ERIC Educational Resources Information Center
Palavan, Ozcan; Cicek, Volkan; Atabay, Merve
2016-01-01
Outdoor education stands out as one of the methods to deliver the desired educational outcomes, taking the needs of the students and teachers and the curricular objectives into consideration. Outdoor education focuses on experiential, hands-on learning in real-life environments through the senses, e.g., through visual, auditory, and tactile means,…
Desired clearance around a vehicle while parking or performing low speed maneuvers.
DOT National Transportation Integrated Search
2004-10-01
This experiment examined how close to objects (such as a wall or another vehicle) people would drive when parking. The findings will be used as a basis for visual and/or auditory warnings provided by parking assistance systems. A total of 16 peopl...
Childhood Onset Schizophrenia: High Rate of Visual Hallucinations
ERIC Educational Resources Information Center
David, Christopher N.; Greenstein, Deanna; Clasen, Liv; Gochman, Pete; Miller, Rachel; Tossell, Julia W.; Mattai, Anand A.; Gogtay, Nitin; Rapoport, Judith L.
2011-01-01
Objective: To document high rates and clinical correlates of nonauditory hallucinations in childhood onset schizophrenia (COS). Method: Within a sample of 117 pediatric patients (mean age 13.6 years), diagnosed with COS, the presence of auditory, visual, somatic/tactile, and olfactory hallucinations was examined using the Scale for the Assessment…
Cross-Situational Learning of Minimal Word Pairs
ERIC Educational Resources Information Center
Escudero, Paola; Mulak, Karen E.; Vlach, Haley A.
2016-01-01
"Cross-situational statistical learning" of words involves tracking co-occurrences of auditory words and objects across time to infer word-referent mappings. Previous research has demonstrated that learners can infer referents across sets of very phonologically distinct words (e.g., WUG, DAX), but it remains unknown whether learners can…
The Tactile Continuity Illusion
ERIC Educational Resources Information Center
Kitagawa, Norimichi; Igarashi, Yuka; Kashino, Makio
2009-01-01
We can perceive the continuity of an object or event by integrating spatially/temporally discrete sensory inputs. The mechanism underlying this perception of continuity has intrigued many researchers and has been well documented in both the visual and auditory modalities. The present study shows for the first time to our knowledge that an illusion…
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Computer-Based Auditory Training Programs for Children with Hearing Impairment – A Scoping Review
Nanjundaswamy, Manohar; Prabhu, Prashanth; Rajanna, Revathi Kittur; Ningegowda, Raghavendra Gulaganji; Sharma, Madhuri
2018-01-01
Introduction Communication breakdown, a consequence of hearing impairment (HI), has been fought by fitting amplification devices and providing auditory training since the inception of audiology. The advances in both audiology and rehabilitation programs have led to the advent of computer-based auditory training programs (CBATPs). Objective To review the existing literature documenting the evidence-based CBATPs for children with HI. Since there was only one such article, we also chose to review the commercially available CBATPs for children with HI. The strengths and weaknesses of the existing literature were reviewed in order to inform further research. Data Synthesis The Google Scholar and PubMed databases were searched using various combinations of keywords. The participant, intervention, control, outcome and study design (PICOS) criteria were used for the inclusion of articles. Out of 124 article abstracts reviewed, 5 studies were shortlisted for detailed reading. One of them satisfied all the criteria and was taken up for review. The commercially available programs were chosen based on an extensive search in Google. The reviewed article was well-structured, with appropriate outcomes. The commercially available programs cover many aspects of auditory training through a wide range of stimuli and activities. Conclusions There is a dire need for extensive research in the field of CBATPs to establish their efficacy and to establish them as evidence-based practices. PMID:29371904
Amin, Sanjiv B; Wang, Hongyue; Laroia, Nirupama; Orlando, Mark
2016-01-01
Objective To evaluate if unbound bilirubin is a better predictor of auditory neuropathy spectrum disorder (ANSD) than total serum bilirubin (TSB) or the bilirubin albumin molar ratio (BAMR) in late preterm and term neonates with severe jaundice (TSB ≥ 20 mg/dL or TSB that met exchange transfusion criteria). Study design Infants ≥ 34 weeks gestational age with severe jaundice during the first two weeks of life were eligible for the prospective observational study. A comprehensive auditory evaluation was performed within 72 hours of peak TSB. ANSD was defined as absent or abnormal auditory brainstem evoked response waveform morphology at 80 decibel click intensity in the presence of normal outer hair cell function. TSB, serum albumin, and unbound bilirubin were measured using the colorimetric, bromocresol green, and modified peroxidase method, respectively. Results Five of 44 infants developed ANSD. By logistic regression, peak unbound bilirubin but not peak TSB or peak BAMR was associated with ANSD (odds ratio 4.6, 95% CI: 1.6-13.5, p = 0.002). On comparing receiver operating characteristic curves, the area under the curve (AUC) for unbound bilirubin (0.92) was significantly greater (p = 0.04) compared with the AUC for TSB (0.50) or BAMR (0.62). Conclusions Unbound bilirubin is a more sensitive and specific predictor of ANSD than TSB or BAMR in late preterm and term infants with severe jaundice. PMID:26952116
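A schematic of the predictor comparison described above, with invented bilirubin values rather than the study's measurements: logistic regression tests the association, and AUCs compare discrimination between candidate predictors.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Invented data: unbound bilirubin (ub, ug/dL), total serum bilirubin
    # (tsb, mg/dL), and ANSD outcome (1 = ANSD) for ten infants.
    ub   = np.array([0.6, 0.8, 1.9, 0.7, 2.4, 0.9, 2.1, 0.5, 1.0, 2.6])
    tsb  = np.array([21.0, 24.0, 22.0, 25.0, 23.0, 22.0, 24.0, 21.0, 25.0, 23.0])
    ansd = np.array([0, 0, 1, 0, 1, 0, 1, 0, 0, 1])

    model = LogisticRegression().fit(ub.reshape(-1, 1), ansd)  # association
    print("AUC, unbound bilirubin:", roc_auc_score(ansd, ub))
    print("AUC, total serum bilirubin:", roc_auc_score(ansd, tsb))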
Liu, Zhi; Sun, Yongzhu; Chang, Haifeng; Cui, Pengcheng
2014-01-01
Objective This study was designed to establish a low-dose salicylate-induced tinnitus rat model and to investigate whether the central or peripheral auditory system is involved in tinnitus. Methods The lick suppression ratio (R), lick count and lick latency of conditioned rats in a salicylate group (120 mg/kg, intraperitoneally) and a saline group were first compared. Bilateral auditory nerves were ablated in unconditioned rats, and lick count and lick latency were compared before and after ablation. The ablation was then performed in conditioned rats, and lick count and lick latency were compared between the salicylate and saline groups and between the ablated and unablated salicylate groups. Results Both the R value and the lick count in the salicylate group were significantly higher than those in the saline group, and lick latency in the salicylate group was significantly shorter than that in the saline group. No significant changes were observed in lick count and lick latency before and after ablation. After ablation, lick count and lick latency in the salicylate group were significantly higher and shorter, respectively, than those in the saline group, but they were significantly lower and longer, respectively, than those in the unablated salicylate group. Conclusion A low dose of salicylate (120 mg/kg) can induce tinnitus in rats, and both the central and peripheral auditory systems participate in the generation of salicylate-induced tinnitus. PMID:25269067
van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.
2017-01-01
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
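The cross-decoding logic described above (train on one modality or group, test on the other) can be sketched in a few lines of Python. All data here are simulated stand-ins built from a shared category code plus noise, not the study's fMRI patterns.

    import numpy as np
    from sklearn.svm import LinearSVC

    # Simulated cross-modal decoding: a shared category code plus noise.
    rng = np.random.default_rng(1)
    n_per_cat, n_vox, n_cat = 15, 300, 4
    category_code = rng.normal(size=(n_cat, n_vox))

    def simulate_runs(noise):
        X = np.vstack([category_code[c]
                       + rng.normal(scale=noise, size=(n_per_cat, n_vox))
                       for c in range(n_cat)])
        y = np.repeat(np.arange(n_cat), n_per_cat)
        return X, y

    X_aud, y_aud = simulate_runs(noise=2.0)   # "auditory" patterns (blind group)
    X_vis, y_vis = simulate_runs(noise=2.0)   # "visual" patterns (sighted group)

    clf = LinearSVC().fit(X_aud, y_aud)       # train on one modality...
    print("cross-modal accuracy:", clf.score(X_vis, y_vis))  # ...test on the other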
2016-01-01
Abstract Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human experience includes many different types of referents, some visual, some auditory, some very abstract. Does the neural basis of reference resolution depend on the nature of the referents, or do our brains use a modality-general mechanism for linking meanings to referents? Here we report evidence for both. Using magnetoencephalography (MEG), we varied both the modality of the referents, which consisted either of visual or auditory objects, and the point at which reference resolution was possible within sentences. Source-localized MEG responses revealed brain activity associated with reference resolution that was independent of the modality of the referents, localized to the medial parietal lobe and starting ∼415 ms after the onset of reference-resolving words. A modality-specific response to reference resolution in auditory domains was also found, in the vicinity of auditory cortex. Our results suggest that referential language processing cannot be reduced to processing in classical language regions plus representations of the referential domain in modality-specific neural systems. Instead, our results suggest that reference resolution engages the medial parietal cortex, which supports a mechanism for referential processing regardless of the content modality. PMID:28058272