Science.gov

Sample records for auditory spatial learning

  1. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  2. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  3. Expression of synaptic proteins in the hippocampus and spatial learning in chicks following prenatal auditory stimulation.

    PubMed

    Chaudhury, Sraboni; Jain, Suman; Wadhwa, Shashi

    2010-07-01

    Prenatal auditory stimulation by species-specific sound influences the expression and levels of calcium-binding proteins in the chick hippocampus, which is important to learning and memory. Stimulation by sitar music additionally produces structural changes in the hippocampus. Synapse density, which influences synaptic plasticity, is also increased following both types of sound stimulation. Here we report the expression of mRNA as well as levels of synaptic proteins (synaptophysin, synapsin I and PSD-95) in the hippocampus of developing chicks subjected to prenatal auditory stimulation. Further, to evaluate the behavioral outcome following acoustic stimulation, posthatch day 1 (PH1) chicks were analyzed by T-maze test for spatial learning. Fertilized zero-day eggs were incubated under normal conditions and subjected to patterned species-specific sounds or sitar music at 65 dB for 15 min/h over 24 h, in a frequency range of 100-6,300 Hz, for a period of 11 days from embryonic day (E) 10 until hatching. Following both types of prenatal acoustic stimulation, a significant increase in the levels of synaptophysin mRNA and protein was found from E12, whereas that of synapsin I and PSD-95 was observed from E16, suggesting early maturation of the excitatory synapse. A significant decrease in the time taken to reach the target over the 3 trials in both sound-stimulated groups indicates improved spatial learning. In the music-stimulated group, however, the time taken to reach the target was reduced from the very first trial, which may point to an involvement of other behavioral attributes in facilitating spatial navigation. Copyright 2010 S. Karger AG, Basel.

  4. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588
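
    The many-to-one auditory-to-visuomotor correspondence at the heart of the SMART task can be sketched schematically. Everything below (category labels, token counts, and the mapping itself) is an invented placeholder rather than the authors' stimulus set; the point is only the trial structure, in which the sound category fully predicts the visual target location:

```python
import random

# Hypothetical many-to-one mapping: four sound categories, four screen locations.
CATEGORY_TO_LOCATION = {"cat1": 0, "cat2": 1, "cat3": 2, "cat4": 3}

def make_trial(rng):
    """One trial: a category exemplar (a fresh token each time, standing in
    for a complex acoustic exemplar) followed by a visual target whose
    location is fully predicted by the exemplar's category."""
    category = rng.choice(sorted(CATEGORY_TO_LOCATION))
    exemplar = f"{category}_token{rng.randint(1, 6)}"
    return exemplar, CATEGORY_TO_LOCATION[category]

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(8)]

# A participant who has incidentally learned the mapping can anticipate the
# target location from the sounds alone, which shows up as faster detection.
for exemplar, location in trials:
    print(exemplar, "->", location)
```

    Because the category, not the individual token, carries the prediction, generalization to novel exemplars falls out of the same structure.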

  5. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  6. Auditory spatial processing in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Nicholas, Jennifer M; Yong, Keir X X; Downey, Laura E; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12), in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion and in stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease spectrum.

  7. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities in azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improvement in signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and conducting an acoustic playback study.

  8. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
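
    The hemifield code described above can be illustrated with a minimal numerical sketch. The logistic tuning shape and its width are assumptions made for illustration, not values from the review; the point is that source azimuth is recoverable from the rate difference of just two broadly tuned opponent channels, without any topographic map:

```python
import numpy as np

def hemifield_rates(az_deg, width=20.0):
    """Normalized firing rates of two opponent populations, each broadly
    tuned to one side of space (logistic tuning; width is an assumption)."""
    right = 1.0 / (1.0 + np.exp(-az_deg / width))
    left = 1.0 / (1.0 + np.exp(az_deg / width))
    return left, right

def decode_azimuth(left, right, width=20.0):
    """Read azimuth (degrees) from the opponent-channel rate difference.
    For logistic tuning, right - left = tanh(az / (2 * width))."""
    return 2.0 * width * np.arctanh(right - left)

for az in (-60.0, 0.0, 45.0):
    left, right = hemifield_rates(az)
    print(f"{az:+.0f} deg -> decoded {decode_azimuth(left, right):+.1f} deg")
```

    For these idealized channels the inverse is exact; with realistic, noisy tuning curves the readout would only be approximate, which is one reason task context and stimulation can modulate the code.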

  9. The Perceptual Determinants of Repetition Learning in Auditory Space

    ERIC Educational Resources Information Center

    Parmentier, Fabrice B. R.; Maybery, Murray T.; Huitson, Matthew; Jones, Dylan M.

    2008-01-01

    The present study includes seven experiments examining the effect of repetition learning (Hebb effect) on auditory spatial serial recall. Participants were asked to remember sequences of spatial locations marked by auditory stimuli, where one sequence was repeated across trials. Consistent with the proposition that the spatial scattering of…

  10. Tactile feedback improves auditory spatial localization.

    PubMed

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training with either tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  11. Auditory training in students with learning disabilities.

    PubMed

    Pinheiro, Fábio Henrique; Capellini, Simone Aparecida

    2010-01-01

    Aim: to verify the efficacy of an auditory training program in students with learning disabilities, and to compare the results of students with and without learning disabilities, who were and were not submitted to the auditory training program, in pre- and post-testing. Method: participants were 40 students, divided as follows: GI, subdivided into GIe (10 students with learning disabilities who were submitted to the program) and GIc (10 students with learning disabilities who were not submitted to auditory training); and GII, subdivided into GIIe (10 students without learning difficulties who were submitted to the auditory training program) and GIIc (10 students without learning difficulties who were not submitted to auditory training). The auditory training program Audio Training was used. Results: GI presented lower performance than GII in activities related to auditory skills and phonological awareness. Comparing the pre- and post-testing results, GIe and GIIe presented better performances in activities involving auditory skills and phonological awareness after the auditory training program. Conclusion: the performance of students with learning disabilities in auditory and phonological tasks is lower than that of students without learning disabilities. The auditory training program was effective and allowed students to develop these skills.

  12. Devices and Procedures for Auditory Learning.

    ERIC Educational Resources Information Center

    Ling, Daniel

    1986-01-01

    The article summarizes information on assistive devices (hearing aids, cochlear implants, tactile aids, visual aids) and rehabilitation procedures (auditory training, speechreading, cued speech, and speech production) to aid the auditory learning of the hearing impaired. (DB)

  13. Auditory Spatial Perception without Vision

    PubMed Central

    Voss, Patrice

    2016-01-01

    Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. While several anecdotal accounts tend to associate complete blindness with exceptional hearing abilities, experimental evidence supporting such claims is matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings which support either enhancements or deficits in spatial hearing observed following visual loss and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence will be examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and in terms of frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane. Potential explanations underlying these contrasting findings will be discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits. PMID:28066286

  14. Auditory Spatial Receptive Fields Created by Multiplication

    NASA Astrophysics Data System (ADS)

    Peña, José Luis; Konishi, Masakazu

    2001-04-01

    Examples of multiplication by neurons or neural circuits are scarce, although many computational models use this basic operation. The owl's auditory system computes interaural time (ITD) and level (ILD) differences to create a two-dimensional map of auditory space. Space-specific neurons are selective for combinations of ITD and ILD, which define, respectively, the horizontal and vertical dimensions of their receptive fields. A multiplication of separate postsynaptic potentials tuned to ITD and ILD, rather than an addition, can account for the subthreshold responses of these neurons to ITD-ILD pairs. Other nonlinear processes improve the spatial tuning of the spike output and reduce the fit to the multiplicative model.
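
    A toy numerical sketch of the multiplicative idea (all tuning centers and widths below are invented for illustration, not the owl data): multiplying Gaussian ITD and ILD tuning curves yields a compact two-dimensional receptive field, whereas adding the same curves produces a cross-shaped, far less selective one:

```python
import numpy as np

# Hypothetical tuning of one space-specific neuron (all parameters invented).
itd = np.linspace(-200, 200, 81)                      # microseconds
ild = np.linspace(-20, 20, 81)                        # dB
itd_tuning = np.exp(-0.5 * ((itd - 50) / 40) ** 2)    # best ITD: +50 us
ild_tuning = np.exp(-0.5 * ((ild + 5) / 6) ** 2)      # best ILD: -5 dB

# Multiplicative combination (outer product): the cell responds strongly
# only where BOTH cues match, giving a compact 2-D receptive field.
mult = np.outer(ild_tuning, itd_tuning)

# Additive combination: either cue alone can drive the cell, so the field
# is cross-shaped and poorly localized in the cue plane.
add = 0.5 * (ild_tuning[:, None] + itd_tuning[None, :])

# Fraction of the ITD-ILD plane responding above half maximum:
frac_mult = (mult > 0.5 * mult.max()).mean()
frac_add = (add > 0.5 * add.max()).mean()
print(f"half-max area: multiplicative {frac_mult:.3f}, additive {frac_add:.3f}")
```

    The half-maximum footprint of the multiplicative field is the product of the two one-dimensional footprints, which is why multiplication, and not addition, can carve out a spatially restricted receptive field.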

  15. Auditory Spatial Recalibration in Congenital Blind Individuals

    PubMed Central

    Finocchietti, Sara; Cappagli, Giulia; Gori, Monica

    2017-01-01

    Blind individuals show impairments for auditory spatial skills that require complex spatial representation of the environment. We suggest that this is partially due to the egocentric frame of reference used by blind individuals. Here we investigate the possibility of reducing the mentioned auditory spatial impairments with an audio-motor training. Our hypothesis is that the association between a motor command and the corresponding movement's sensory feedback can provide an allocentric frame of reference and consequently help blind individuals in understanding complex spatial relationships. Subjects were required to localize the end point of a moving sound before and after either 2 min of audio-motor training or a complete rest. During the training, subjects were asked to move their hand, and consequently the sound source, to freely explore the space around the setup and the body. Both congenitally blind subjects (N = 20) and blindfolded healthy controls (N = 28) participated in the study. Results suggest that the audio-motor training was effective in improving space perception of blind individuals. The improvement was not observed in those subjects who did not perform the training. This study demonstrates that it is possible to recalibrate the auditory spatial representation in congenitally blind individuals with a short audio-motor training and provides new insights for rehabilitation protocols in blind people. PMID:28261053

  16. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  18. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what provides the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  19. Spatial Stream Segregation by Auditory Cortical Neurons

    PubMed Central

    Bremen, Peter

    2013-01-01

    In a complex auditory scene, a “cocktail party” for example, listeners can disentangle multiple competing sequences of sounds. A recent psychophysical study in our laboratory demonstrated a robust spatial component of stream segregation showing ∼8° acuity. Here, we recorded single- and multiple-neuron responses from the primary auditory cortex of anesthetized cats while presenting interleaved sound sequences that human listeners would experience as segregated streams. Sequences of broadband sounds alternated between pairs of locations. Neurons synchronized preferentially to sounds from one or the other location, thereby segregating competing sound sequences. Neurons favoring one source location or the other tended to aggregate within the cortex, suggestive of modular organization. The spatial acuity of stream segregation was as narrow as ∼10°, markedly sharper than the broad spatial tuning for single sources that is well known in the literature. Spatial sensitivity was sharpest among neurons having high characteristic frequencies. Neural stream segregation was predicted well by a parameter-free model that incorporated single-source spatial sensitivity and a measured forward-suppression term. We found that the forward suppression was not due to post-discharge adaptation in the cortex and, therefore, must have arisen in the subcortical pathway or at the level of thalamocortical synapses. A linear-classifier analysis of single-neuron responses to rhythmic stimuli like those used in our psychophysical study yielded thresholds overlapping those of human listeners. Overall, the results indicate that the ascending auditory system does the work of segregating auditory streams, bringing them to discrete modules in the cortex for selection by top-down processes. PMID:23825404
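
    The forward-suppression account can be caricatured in a few lines. The Gaussian tuning, suppression strength, and source locations below are assumptions, not the paper's fitted parameters; the sketch only shows how forward suppression can amplify a modest single-source preference into near-exclusive synchronization to one of two interleaved streams:

```python
import numpy as np

def spatial_tuning(az_deg, best_az=-30.0, width=60.0):
    """Broad single-source spatial tuning (Gaussian; parameters assumed)."""
    return float(np.exp(-0.5 * ((az_deg - best_az) / width) ** 2))

def respond(sequence_az, suppression=0.6):
    """Response to each sound in a sequence: its tuned drive minus a fraction
    of the response to the immediately preceding sound (forward suppression)."""
    responses, prev = [], 0.0
    for az in sequence_az:
        r = max(spatial_tuning(az) - suppression * prev, 0.0)
        responses.append(r)
        prev = r
    return responses

# Two interleaved streams alternating between -30 and +30 degrees azimuth:
resp = respond([-30.0, +30.0] * 8)
a_rate = float(np.mean(resp[0::2]))   # sounds at the preferred location
b_rate = float(np.mean(resp[1::2]))   # sounds at the non-preferred location
print(f"preferred stream {a_rate:.2f}, competing stream {b_rate:.2f}")
```

    Without suppression the two rates would differ only by the tuning ratio (about 0.61 in this toy model); with it, the neuron synchronizes almost exclusively to the preferred stream, mirroring the sharpened spatial acuity reported in the abstract.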

  20. Auditory agnosia and auditory spatial deficits following left hemispheric lesions: evidence for distinct processing pathways.

    PubMed

    Clarke, S; Bellmann, A; Meuli, R A; Assal, G; Steck, A J

    2000-01-01

    Auditory recognition and auditory spatial functions were studied in four patients with circumscribed left hemispheric lesions. Patient FD was severely deficient in recognition of environmental sounds but normal in auditory localisation and auditory motion perception. The lesion included the left superior, middle and inferior temporal gyri and lateral auditory areas (as identified in previous anatomical studies), but spared Heschl's gyrus, the acoustic radiation and the thalamus. Patient SD had the same profile as FD, with deficient recognition of environmental sounds but normal auditory localisation and motion perception. The lesion comprised the postero-inferior part of the frontal convexity and the anterior third of the temporal lobe; data from non-human primates indicate that the latter are interconnected with lateral auditory areas. Patient MA was deficient in recognition of environmental sounds, auditory localisation and auditory motion perception, confirming that auditory spatial functions can be disturbed by left unilateral damage; the lesion involved the supratemporal region as well as the temporal, postero-inferior frontal and antero-inferior parietal convexities. Patient CZ was severely deficient in auditory motion perception and partially deficient in auditory localisation, but normal in recognition of environmental sounds; the lesion involved large parts of the parieto-frontal convexity and the supratemporal region. We propose that auditory information is processed in the human auditory cortex along two distinct pathways, one lateral devoted to auditory recognition and one medial and posterior devoted to auditory spatial functions.

  1. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
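
    The core signal path of such a display can be sketched as FIR convolution of a mono input with a left/right head-related impulse response (HRIR) pair. The HRIRs below are toy stand-ins (a pure interaural delay plus level difference), not the measured, simplified transfer functions run on the Motorola 56001 processors described above.

```python
import numpy as np

# Hedged sketch of binaural spatialization by FIR filtering.
# All parameter values here are illustrative, not from the NASA display.

fs = 8000                      # sample rate (Hz), chosen for illustration
t = np.arange(fs // 10) / fs   # 100 ms of signal
mono = np.sin(2 * np.pi * 440 * t)

# Toy HRIRs for a source on the right: the left ear hears a delayed,
# attenuated copy of what the right ear hears.
hrir_right_ear = np.zeros(32); hrir_right_ear[0] = 1.0
hrir_left_ear = np.zeros(32);  hrir_left_ear[5] = 0.6   # 5 samples ~ 0.6 ms at 8 kHz

left = np.convolve(mono, hrir_left_ear)
right = np.convolve(mono, hrir_right_ear)
binaural = np.stack([left, right], axis=1)   # 2-channel output for headphones
print(binaural.shape)
```

    A real display would use measured HRIRs per target azimuth and run the two convolutions in real time, but the interaural delay and level cues imposed here are the same kind of information those filters carry.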

  3. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  4. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  5. Impairment of auditory spatial localization in congenitally blind human subjects.

    PubMed

    Gori, Monica; Sandini, Giulio; Martinoli, Cristina; Burr, David C

    2014-01-01

    Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can negatively affect auditory spatial localization. Here we report, for the first time, severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially distributed sound sources were on average 4.2-fold higher than typical thresholds, with half of the subjects performing at chance. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or on auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This finding points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind.

  6. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    NASA Astrophysics Data System (ADS)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  7. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer…

  8. Auditory spatial attention using interaural time differences.

    PubMed

    Sach, A J; Hill, N I; Bailey, P J

    2000-04-01

    Previous probe-signal studies of auditory spatial attention have shown faster responses to sounds at an expected versus an unexpected location, making no distinction between the use of interaural time difference (ITD) cues and interaural-level difference cues. In 5 experiments, performance on a same-different spatial discrimination task was used in place of the reaction time metric, and sounds, presented over headphones, were lateralized only by an ITD. In all experiments, performance was better for signals lateralized on the expected side of the head, supporting the conclusion that ITDs can be used as a basis for covert orienting. The performance advantage generalized to all sounds within the spatial focus and was not dissipated by a trial-by-trial rove in frequency or by a rove in spectral profile. Successful use by the listeners of a cross-modal, centrally positioned visual cue provided evidence for top-down attentional control.
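
    The ITD cue used to lateralize these stimuli is commonly approximated by the Woodworth formula, ITD = (r/c)(θ + sin θ), for a distant source at azimuth θ, head radius r, and speed of sound c. The sketch below uses textbook nominal values, not parameters from this study.

```python
import math

# Hedged sketch: Woodworth's spherical-head approximation of the ITD.
# Head radius (8.75 cm) and speed of sound (343 m/s) are nominal values.

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """ITD in seconds for a far-field source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> ITD = {woodworth_itd(az) * 1e6:6.1f} us")
```

    The monotonic growth of ITD with azimuth (up to roughly 650 µs at 90°) is what lets a headphone experiment lateralize a sound to an "expected side" purely by delaying one channel.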

  9. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  11. Perceptual learning in the developing auditory cortex.

    PubMed

    Bao, Shaowen

    2015-03-01

    A hallmark of the developing auditory cortex is the heightened plasticity in the critical period, during which acoustic inputs can indelibly alter cortical function. However, not all sounds in the natural acoustic environment are ethologically relevant. How does the auditory system resolve relevant sounds from the acoustic environment in such an early developmental stage when most associative learning mechanisms are not yet fully functional? What can the auditory system learn from one of the most important classes of sounds, animal vocalizations? How does naturalistic acoustic experience shape cortical sound representation and perception? To answer these questions, we need to consider an unusual strategy, statistical learning, where what the system needs to learn is embedded in the sensory input. Here, I will review recent findings on how certain statistical structures of natural animal vocalizations shape auditory cortical acoustic representations, and how cortical plasticity may underlie learned categorical sound perception. These results will be discussed in the context of human speech perception.

  12. Auditory spatial localization: Developmental delay in children with visual impairments.

    PubMed

    Cappagli, Giulia; Gori, Monica

    2016-01-01

    For individuals with visual impairments, auditory spatial localization is one of the most important features to navigate in the environment. Many works suggest that blind adults show similar or even enhanced performance for localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that even simple auditory spatial tasks are compromised in visually impaired children, and that this capacity recovers over time.

  13. The auditory brainstem is a barometer of rapid auditory learning.

    PubMed

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned>pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.
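
    The patterned-versus-pseudo-random manipulation described above comes down to sound-to-sound transition probabilities. The sketch below illustrates the idea with invented triplet names: a sequence built from fixed triplets has high within-triplet transition probabilities, while shuffling the same sounds flattens them.

```python
from collections import Counter
import random

# Hedged sketch of the statistical-learning manipulation: triplet labels
# ('A'..'F') and sequence lengths are invented for illustration.

triplets = [('A', 'B', 'C'), ('D', 'E', 'F')]

rng = random.Random(0)
patterned = [s for _ in range(200) for s in rng.choice(triplets)]
pseudo_random = patterned[:]
rng.shuffle(pseudo_random)   # same sounds, flattened transition structure

def transition_probability(seq, first, second):
    """P(second | first), estimated from adjacent pairs in the sequence."""
    pairs = Counter(zip(seq, seq[1:]))
    total_from_first = sum(n for (a, _), n in pairs.items() if a == first)
    return pairs[(first, second)] / total_from_first

print(transition_probability(patterned, 'A', 'B'))      # within-triplet: 1.0
print(transition_probability(pseudo_random, 'A', 'B'))  # roughly chance
```

    A listener (or a brainstem response) that is sensitive to this statistic can segment the patterned stream into its triplets even though the individual sounds are identical across conditions.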

  14. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  15. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    ERIC Educational Resources Information Center

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  16. Negative emotion provides cues for orienting auditory spatial attention

    PubMed Central

    Asutay, Erkin; Västfjäll, Daniel

    2015-01-01

    Auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left–right or front–back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants’ task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same valence pairs (emotional–emotional or neutral–neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain. PMID:26029149

  17. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of searching paths allowed us to observe how auditory information was coded to memorize the position of the target. The search paths suggested that space can be efficiently coded without visual information in normal sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  18. Auditory and motor imagery modulate learning in music performance.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of…

  20. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  1. Call sign intelligibility improvement using a spatial auditory display

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    1993-04-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
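
    The adaptive staircase cited in these intelligibility studies can be sketched as a simple up-down rule: the signal-to-babble level drops after a correct response and rises after an error, so the track converges near a fixed point on the listener's psychometric function. The simulated listener below (logistic function, threshold, step size) is entirely invented for illustration.

```python
import random

# Hedged sketch of a 1-up/1-down adaptive staircase, which converges on the
# 50%-correct point of the simulated psychometric function.

rng = random.Random(1)

def listener_correct(snr_db, threshold_db=-6.0, slope=1.0):
    """Simulated probability of a correct call-sign report at a given SNR (invented)."""
    p = 1.0 / (1.0 + 10 ** (-(snr_db - threshold_db) * slope / 10))
    return rng.random() < p

snr, step = 10.0, 2.0            # starting level and step size (dB), illustrative
reversals, last_direction = [], None
while len(reversals) < 12:
    direction = -1 if listener_correct(snr) else +1   # harder after correct, easier after error
    if last_direction is not None and direction != last_direction:
        reversals.append(snr)                         # track changed direction
    last_direction = direction
    snr += direction * step

# Average the late reversals, discarding the early ones while the track settles.
threshold_estimate = sum(reversals[4:]) / len(reversals[4:])
print(f"estimated threshold: {threshold_estimate:.1f} dB SNR")
```

    Running such a track once per spatial condition (e.g., per azimuth) yields the per-position intelligibility thresholds whose differences give the 6 dB spatialization advantage reported above.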

  3. Early auditory enrichment with music enhances auditory discrimination learning and alters NR2B protein expression in rat auditory cortex.

    PubMed

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2009-01-03

    Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In our present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in the NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.

  4. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear to different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.

  5. Auditory Discrimination Learning: Role of Working Memory

    PubMed Central

    Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  6. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    PubMed

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound-source identification task, and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. The setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and the interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lower for frontal than for lateral sound sources, and for low-frequency compared with high-frequency noise bursts. The study also shows better performance throughout development for spatial hearing based on interaural time differences than for that based on intensity differences. These findings confirm that specific aspects of central auditory processing continue to develop during childhood up to adolescence.

  7. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes.

    PubMed

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in…

  8. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in…

  9. The role of auditory feedback in vocal learning and maintenance

    PubMed Central

    Tschida, Katherine; Mooney, Richard

    2011-01-01

    Auditory experience is critical for the acquisition and maintenance of learned vocalizations in both humans and songbirds. Despite the central role of auditory feedback in vocal learning and maintenance, where and how auditory feedback affects neural circuits important to vocal control remain poorly understood. Recent studies of singing birds have uncovered neural mechanisms by which feedback perturbations affect vocal plasticity and also have identified feedback-sensitive neurons at or near sites of auditory and vocal motor interaction. Additionally, recent studies in marmosets have underscored that even in the absence of vocal learning, vocalization remains flexible in the face of changing acoustical environments, pointing to rapid interactions between auditory and vocal motor systems. Finally, recent studies show that a juvenile songbird’s initial auditory experience of a song model has long-lasting effects on sensorimotor neurons important to vocalization, shedding light on how auditory memories and feedback interact to guide vocal learning. PMID:22137567

  10. Integrated processing of spatial cues in human auditory cortex.

    PubMed

    Salminen, Nelli H; Takanen, Marko; Santala, Olli; Lamminsalo, Jarkko; Altoè, Alessandro; Pulkki, Ville

    2015-09-01

    Human sound source localization relies on acoustical cues, most importantly the interaural differences in time and level (ITD and ILD). To reach a unified representation of auditory space, the auditory nervous system needs to combine the information provided by these two cues. In search of such a unified representation, we conducted a magnetoencephalography (MEG) experiment that took advantage of the location-specific adaptation of the auditory cortical N1 response. In general, the attenuation caused by a preceding adaptor sound to the response elicited by a probe depends on their spatial arrangement: if the two sounds coincide, adaptation is stronger than when the locations differ. Here, we presented adaptor-probe pairs that contained different localization cues, for instance, adaptors with ITD and probes with ILD. We found that the adaptation of the N1 amplitude was location-specific across localization cues. This result can be explained by the existence of auditory cortical neurons that are sensitive to sound source location independently of which cue, ITD or ILD, provides the location information. Such neurons would form a cue-independent, unified representation of auditory space in human auditory cortex.
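
    For reference, the azimuth dependence of the ITD cue is commonly approximated with Woodworth's spherical-head formula, ITD(θ) = (a/c)(θ + sin θ). A short sketch; the 8.75 cm head radius is a conventional textbook value, not a parameter from this study:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth approximation of the interaural time difference (seconds)
    for a distant source at the given azimuth, assuming a spherical head."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# The cue grows from 0 at the midline to roughly 650 microseconds at 90 degrees.
for az in (0, 30, 60, 90):
    print(az, round(itd_woodworth(az) * 1e6), "us")
```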

  11. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway

    PubMed Central

    Yao, Justin D.; Bremen, Peter

    2015-01-01

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct, mutually synchronized neural populations. SIGNIFICANCE STATEMENT Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS in…

  12. Head-centred meridian effect on auditory spatial attention orienting.

    PubMed

    Ferlazzo, Fabio; Couyoumdjian, Alessandro; Padovani, Tullia; Belardinelli, Marta Olivetti

    2002-07-01

    Six experiments examined whether a single system or separate systems underlie visual and auditory orienting of spatial attention. When auditory targets were used, reaction times were slower on trials in which cued and target locations were on opposite sides of the vertical head-centred meridian than on trials in which cued and target locations were on opposite sides of the vertical visual meridian or were not separated by any meridian. The head-centred meridian effect for auditory stimuli was apparent when targets were cued by either visual (Experiments 2, 3, and 6) or auditory cues (Experiment 5). Also, the head-centred meridian effect was found when targets were delivered either through headphones (Experiments 2, 3, and 5) or external loudspeakers (Experiment 6). Conversely, participants showed a visual meridian effect when they were required to respond to visual targets (Experiment 4). These results strongly suggest that auditory and visual spatial attention systems are indeed separate, as far as endogenous orienting is concerned.

  13. Construct Validity of Auditory Verbal Learning Test.

    PubMed

    Can, Handan; Doğutepe, Elvin; Torun Yazıhan, Nakşidil; Korkman, Hamdi; Erdoğan Bakar, Emel

    2016-01-01

    The Auditory Verbal Learning Test (AVLT) is frequently used in the neuropsychology literature for comprehensive assessment of memory. The test measures verbal learning as immediate and delayed free recall, recognition, and retroactive and proactive interference. Adaptation of the AVLT for the Turkish population has been completed, while research and development studies are still underway. The purpose of the present study was to investigate the construct validity of the test in order to contribute to this research and development process. In line with this purpose, data were obtained from 78 healthy participants aged between 20 and 69. The exclusion criteria included neurological and/or psychiatric disorders as well as untreated auditory/visual disorders. The AVLT was administered to participants individually by two trained psychologists. Principal component analysis, used to investigate the components represented by the AVLT scores, yielded learning and free recall/recognition components, in line with the construct of the test. Distractor scores were also added to these two components in the structural equation model. Analyses were carried out at a descriptive level to establish the relationships between age, education, gender, and AVLT scores. These findings, which are consistent with the literature indicating that memory is affected by the developmental process, suggest that the learning/free recall, recognition, and distractor scores of the AVLT demonstrate a component pattern consistent with theoretical knowledge. This conclusion suggests that the AVLT is a valid measurement instrument for the Turkish population.
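
    A principal component analysis of this kind can be sketched with a generic SVD-based computation. This is not the authors' analysis; the score matrix below is synthetic stand-in data, not the study's 78-participant dataset:

```python
import numpy as np

# Synthetic stand-in: 78 'participants' x 6 subscores (e.g. trial recalls,
# delayed recall, recognition). A real analysis would use observed scores.
rng = np.random.default_rng(1)
scores = rng.standard_normal((78, 6))

# PCA via singular value decomposition of the mean-centered score matrix.
centered = scores - scores.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)  # proportion of variance per component
loadings = vt                    # rows: components; columns: subscore loadings
```

    Components whose loadings group the learning-trial scores together, separately from recognition, would support the two-component structure reported.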

  14. Use of auditory learning to manage listening problems in children.

    PubMed

    Moore, David R; Halliday, Lorna F; Amitay, Sygal

    2009-02-12

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research.

  15. Use of auditory learning to manage listening problems in children

    PubMed Central

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research. PMID:18986969

  16. Multisensory Training Improves Auditory Spatial Processing following Bilateral Cochlear Implantation

    PubMed Central

    Isaiah, Amal; Vongpaisal, Tara; King, Andrew J.

    2014-01-01

    Cochlear implants (CIs) partially restore hearing to the deaf by directly stimulating the inner ear. In individuals fitted with CIs, lack of auditory experience due to loss of hearing before language acquisition can adversely impact outcomes. For example, adults with early-onset hearing loss generally do not integrate inputs from both ears effectively when fitted with bilateral CIs (BiCIs). Here, we used an animal model to investigate the effects of long-term deafness on auditory localization with BiCIs and approaches for promoting the use of binaural spatial cues. Ferrets were deafened either at the age of hearing onset or as adults. All animals were implanted in adulthood, either unilaterally or bilaterally, and were subsequently assessed for their ability to localize sound in the horizontal plane. The unilaterally implanted animals were unable to perform this task, regardless of the duration of deafness. Among animals with BiCIs, early-onset hearing loss was associated with poor auditory localization performance, compared with late-onset hearing loss. However, performance in the early-deafened group with BiCIs improved significantly after multisensory training with interleaved auditory and visual stimuli. We demonstrate a possible neural substrate for this by showing a training-induced improvement in the responsiveness of auditory cortical neurons and in their sensitivity to interaural level differences, the principal localization cue available to BiCI users. Importantly, our behavioral and physiological evidence demonstrates a facilitative role for vision in restoring auditory spatial processing following potential cross-modal reorganization. These findings support investigation of a similar training paradigm in human CI users. PMID:25122908

  17. Sensorimotor Learning Enhances Expectations During Auditory Perception.

    PubMed

    Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara

    2015-08-01

    Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect.

  18. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  19. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    NASA Astrophysics Data System (ADS)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  20. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    PubMed

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

    The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla that form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. Although the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than in a linear sequence. This arrangement affects 30-50% of the auditory receptors overall. No strict correlation of somata positions between the anteroposterior and dorsoventral axes was evident within the distal crista acustica. Dendrites of distal receptors also occasionally occur in a double row, or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly straight organization into a more complex morphology. A linear organization of dendrites is therefore not a morphological criterion that, in all species, distinguishes hearing organs from the nonhearing sense organs serially homologous to ears. The crowded arrangement of both receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints on auditory morphogenesis in postembryonic development.

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290

  2. Hand proximity facilitates spatial discrimination of auditory tones

    PubMed Central

    Tseng, Philip; Yu, Jiaxin; Tzeng, Ovid J. L.; Hung, Daisy L.; Juan, Chi-Hung

    2014-01-01

    The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effects are also present in the auditory modality. With their hands placed either near or away from the audio sources, participants performed an auditory spatial discrimination task (Experiment 1: left or right side), a pitch discrimination task (Experiment 2: high, med, or low tone), and a combined spatial-plus-pitch discrimination task (Experiment 3: left or right; high, med, or low). In Experiment 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right-hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time (RT). No effect of hand proximity was found in Experiments 2 or 3, where a choice RT task requiring pitch discrimination was used. Together, these results suggest that the perceptual and attentional effect of hand proximity is not limited to one specific modality, but applies to the entire “space” near the hands, including stimuli of different modalities (at least visual and auditory) within that space. While these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. (2006), we also discuss the possibility of a dual-mechanism hypothesis to reconcile findings from the multimodal and magno/parvocellular accounts. PMID:24966839

  3. Auditory Highlighting as a Strategy for Improving Listening Comprehension. Auditory Learning Monograph Series 2.

    ERIC Educational Resources Information Center

    Fleming, James W.

    Fifty-eight students (in grades 5 and 6) of average or near-average intelligence (who were reading 2 or more years below their normal expected level and who learned best through the auditory modality) took part in a study to evaluate the following areas: the effectiveness of two auditory highlighting procedures for increasing listening…

  4. Influence of auditory modeling on learning a swimming skill.

    PubMed

    Wang, Lin; Hart, Melanie A

    2005-06-01

    Auditory modeling has been an effective method for learning a new skill in laboratory settings; however, research examining the effectiveness of auditory modeling in a real-world task is limited. Thus, the purpose of this study was to examine the effectiveness of auditory modeling on the learning of a swimming skill, specifically the butterfly stroke. Participants were 37 male college students enrolled in two swimming classes. The classes were randomly assigned as the control group (the standard swimming curriculum for the butterfly stroke, including demonstration, verbal instructions, and practice) or the auditory modeling group (the standard swimming curriculum for the butterfly stroke plus auditory modeling). Quantitative and qualitative analyses indicate that auditory modeling is an effective method for enhancing the learning of this real-world motor skill.

  5. Natural auditory scene statistics shapes human spatial hearing.

    PubMed

    Parise, Cesare V; Knorre, Katharina; Ernst, Marc O

    2014-04-22

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing.

  6. Natural auditory scene statistics shapes human spatial hearing

    PubMed Central

    Parise, Cesare V.; Knorre, Katharina; Ernst, Marc O.

    2014-01-01

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409

  7. Spatial auditory attention is modulated by tactile priming.

    PubMed

    Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus

    2005-07-01

    Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convolved with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation) as well as a "contralaterality" effect between side of stimulation and hemisphere. The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.

  8. Does visual experience influence the spatial distribution of auditory attention?

    PubMed

    Lerens, Elodie; Renier, Laurent

    2014-02-01

    Sighted individuals are less accurate and slower to localize sounds coming from the peripheral space than sounds coming from the frontal space. This specific bias in favour of the frontal auditory space seems reduced in early blind individuals, who are particularly better than sighted individuals at localizing sounds coming from the peripheral space. Currently, it is not clear to what extent this bias in the auditory space is a general phenomenon or if it applies only to spatial processing (i.e. sound localization). In our approach we compared the performance of early blind participants with that of sighted subjects during a frequency discrimination task with sounds originating either from frontal or peripheral locations. Results showed that early blind participants discriminated faster than sighted subjects both peripheral and frontal sounds. In addition, sighted subjects were faster at discriminating frontal sounds than peripheral ones, whereas early blind participants showed equal discrimination speed for frontal and peripheral sounds. We conclude that the spatial bias observed in sighted subjects reflects an unbalance in the spatial distribution of auditory attention resources that is induced by visual experience. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Auditory feedback in error-based learning of motor regularity.

    PubMed

    van Vugt, Floris T; Tillmann, Barbara

    2015-05-05

    Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to learn timing, and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute group were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here reduce variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications.
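
    The error-correction account described above can be sketched as a simple iterative model in which each produced interval is corrected by a fraction of the perceived deviation from the target. Parameter names and values are illustrative, not fitted to the study's data.

```python
import numpy as np

def simulate_tapping(alpha, n_taps=500, target=0.5, noise_sd=0.02, seed=1):
    """Produce inter-tap intervals where a fraction `alpha` of each
    perceived timing error is corrected on the next tap. alpha = 0
    models the mute/jittered groups (no usable auditory feedback),
    so uncorrected motor noise accumulates like a random walk."""
    rng = np.random.default_rng(seed)
    intervals = np.empty(n_taps)
    x = target
    for i in range(n_taps):
        x = x - alpha * (x - target) + rng.normal(0, noise_sd)
        intervals[i] = x
    return intervals

sd_feedback = simulate_tapping(alpha=0.5).std()     # synchronous-sound group
sd_no_feedback = simulate_tapping(alpha=0.0).std()  # mute / jittered groups
print(sd_feedback < sd_no_feedback)  # error correction yields more regular tapping
```

    Under this model, correcting even a modest fraction of perceived errors keeps interval variability bounded, whereas with no correction it grows without limit, mirroring the group difference the study reports.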

  10. Learning strategy determines auditory cortical plasticity

    PubMed Central

    Berlau, Kasia M.; Weinberger, Norman M.

    2013-01-01

    Learning modifies the primary auditory cortex (A1) to emphasize the processing and representation of behaviorally relevant sounds. However, the factors that determine cortical plasticity are poorly understood. While the type and amount of learning are assumed to be important, the actual strategies used to solve learning problems might be critical. To investigate this possibility, we trained two groups of adult male Sprague–Dawley rats to bar-press (BP) for water contingent on the presence of a 5.0 kHz tone using two different strategies: BP during tone presence or BP from tone-onset until receiving an error signal after tone cessation. Both groups achieved the same high levels of correct performance and both groups revealed equivalent learning of absolute frequency during training. Post-training terminal “mapping” of A1 showed no change in representational area of the tone signal frequency but revealed other substantial cue-specific plasticity that developed only in the tone-onset-to-error strategy group. Threshold was decreased ~10 dB and tuning bandwidth was narrowed by ~0.7 octaves. As sound onsets have greater perceptual weighting and cortical discharge efficacy than continual sound presence, the induction of specific learning-induced cortical plasticity may depend on the use of learning strategies that best exploit cortical proclivities. The present results also suggest a general principle for the induction and storage of plasticity in learning, viz., that the representation of specific acquired information may be selected by neurons according to a match between behaviorally selected stimulus features and circuit/network response properties. PMID:17707663

  11. Neural circuits underlying adaptation and learning in the perception of auditory space

    PubMed Central

    King, Andrew J.; Dahmen, Johannes C.; Keating, Peter; Leach, Nicholas D.; Nodal, Fernando R.; Bajo, Victoria M.

    2011-01-01

    Sound localization mechanisms are particularly plastic during development, when the monaural and binaural acoustic cues that form the basis for spatial hearing change in value as the body grows. Recent studies have shown that the mature brain retains a surprising capacity to relearn to localize sound in the presence of substantially altered auditory spatial cues. In addition to the long-lasting changes that result from learning, behavioral and electrophysiological studies have demonstrated that auditory spatial processing can undergo rapid adjustments in response to changes in the statistics of recent stimulation, which help to maintain sensitivity over the range where most stimulus values occur. Through a combination of recording studies and methods for selectively manipulating the activity of specific neuronal populations, progress is now being made in identifying the cortical and subcortical circuits in the brain that are responsible for the dynamic coding of auditory spatial information. PMID:21414354

  12. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
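
    At its core, the HRTF-based spatialization described above is a pair of FIR convolutions, one per ear. A minimal sketch follows, with placeholder impulse responses rather than the simplified HRTFs or Motorola 56001 firmware used in the prototype.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo by FIR-filtering it with
    a pair of head-related impulse responses (one per ear)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Placeholder HRIRs for a source off to the right: the left ear receives
# an attenuated, delayed copy (interaural level and time differences).
fs = 44100
hrir_right_ear = np.zeros(64)
hrir_right_ear[0] = 1.0
hrir_left_ear = np.zeros(64)
hrir_left_ear[20] = 0.5  # ~0.45 ms later at 44.1 kHz, roughly -6 dB

mono = np.random.default_rng(2).standard_normal(fs)  # 1 s of noise
stereo = spatialize(mono, hrir_left_ear, hrir_right_ear)
print(stereo.shape)  # (2, 44163): two channels, length N + taps - 1
```

    Real displays draw the impulse responses from measured HRTF sets per azimuth, which is what gives each call sign its distinct spatial position.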

  13. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  15. Quadri-stability of a spatially ambiguous auditory illusion

    PubMed Central

    Bainbridge, Constance M.; Bainbridge, Wilma A.; Oliva, Aude

    2014-01-01

    In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front vs. behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or “bouncing” to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual awareness. PMID

  16. The effect of real-time auditory feedback on learning new characters.

    PubMed

    Danna, Jérémy; Fontaine, Maureen; Paz-Villagrán, Vietminh; Gondre, Charles; Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc

    2015-10-01

    The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed in two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training sessions and another 24h later. Two characters were learned with and two without real-time auditory feedback (FB). The first group first learned the two non-sonified characters and then the two sonified characters whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short-term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows the writer to perceive his/her movement in addition to the written trace and this might facilitate handwriting learning. However, there were no differential effects of auditory FB, neither long-term nor short-term for the subjects who first learned the characters with auditory FB. We hypothesize that the positive effect on the handwriting kinematics was transferred to characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding.

  17. Effects of sex and age on auditory spatial scene analysis.

    PubMed

    Lewald, Jörg; Hausmann, Markus

    2013-05-01

    Recently, it has been demonstrated that men outperform women in spatial analysis of complex auditory scenes (Zündorf et al., 2011). The present study investigated the relation between the effects of ageing and sex on the spatial segregation of concurrent sounds in younger and middle-aged adults. The experimental design allowed simultaneous presentation of target and distractor sound sources at different locations. The resulting spatial "pulling" effect (that is, the bias of target localization toward that of the distractor) was used as a measure of performance. The pulling effect was stronger in middle-aged than in younger subjects, and in female than in male subjects. This indicates that middle-aged women performed worse than both younger and male subjects in the sensory and attentional mechanisms that extract spatial information about the acoustic event of interest from the auditory scene. Moreover, age-specific differences were most prominent for conditions with targets in right hemispace and distractors in left hemispace, suggesting bilateral asymmetries underlying the effect of ageing.

  18. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    ERIC Educational Resources Information Center

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  19. Auditory cortex involvement in emotional learning and memory.

    PubMed

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals lead to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Human Central Auditory Plasticity Associated with Tone Sequence Learning

    ERIC Educational Resources Information Center

    Gottselig, Julie Marie; Brandeis, Daniel; Hofer-Tinguely, Gilberte; Borbely, Alexander A.; Achermann, Peter

    2004-01-01

    We investigated learning-related changes in amplitude, scalp topography, and source localization of the mismatch negativity (MMN), a neurophysiological response correlated with auditory discrimination ability. Participants (n = 32) underwent two EEG recordings while they watched silent films and ignored auditory stimuli. Stimuli were a standard…

  1. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    ERIC Educational Resources Information Center

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  2. Level dependence of spatial processing in the primate auditory cortex

    PubMed Central

    Wang, Xiaoqin

    2012-01-01

    Sound localization in both humans and monkeys is tolerant to changes in sound levels. The underlying neural mechanism, however, is not well understood. This study reports the level dependence of individual neurons' spatial receptive fields (SRFs) in the primary auditory cortex (A1) and the adjacent caudal field in awake marmoset monkeys. We found that most neurons' excitatory SRF components were spatially confined in response to broadband noise stimuli delivered from the upper frontal sound field. Approximately half the recorded neurons exhibited little change in spatial tuning width over a ∼20-dB change in sound level, whereas the remaining neurons showed either expansion or contraction in their tuning widths. Increased sound levels did not alter the percent distribution of tuning width for neurons collected in either cortical field. The population-averaged responses remained tuned between 30- and 80-dB sound pressure levels for neuronal groups preferring contralateral, midline, and ipsilateral locations. We further investigated the spatial extent and level dependence of the suppressive component of SRFs using a pair of sequentially presented stimuli. Forward suppression was observed when the stimuli were delivered from “far” locations, distant to the excitatory center of an SRF. In contrast to spatially confined excitation, the strength of suppression typically increased with stimulus level at both the excitatory center and far regions of an SRF. These findings indicate that although the spatial tuning of individual neurons varied with stimulus levels, their ensemble responses were level tolerant. Widespread spatial suppression may play an important role in limiting the sizes of SRFs at high sound levels in the auditory cortex. PMID:22592309

  3. Analysis of auditory spatial receptive fields: An application of virtual auditory space technology

    NASA Astrophysics Data System (ADS)

    Takahashi, Terry T.; Keller, Clifford H.; Euston, David R.; Spezio, Michael L.

    2002-05-01

    Virtual auditory space technology (VAST), typically used to simulate acoustical environments, also allows one to vary one sound localization cue independently of others. VAST was used to determine the contributions of interaural time and level differences (ITD, ILD) to the spatial receptive fields (RFs) of neurons in the owl's midbrain. The presentation of noise filtered so that only ITD varied evoked a response along a vertical strip of virtual space, called the ITD-alone RF. Conversely, when ITD was fixed at the cell's optimum and the ILD spectrum of each location was presented, the cell responded along a horizontal strip, called the ILD-alone RF. The spatial RF was at the intersection of the ITD and ILD-alone RFs. The cell's ILD tuning across frequency, combined with individualized head-related transfer functions, was transformed into an ILD-alone RF that predicted half the variance in the measured one. This discrepancy was due partly to the poor response of the neurons to tones, and a new method of inferring frequency-specific ILD tuning from responses to noise explained about 75% of the variance. By understanding how spatial RFs are constructed, it is possible to infer the neural image of complex auditory scenes containing multiple sources and echoes. [Work supported by NIDCD.]
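
    For readers unfamiliar with the ITD cue manipulated above, the classic Woodworth spherical-head approximation gives its dependence on azimuth. This is a generic human-head sketch, not the owl-specific transfer functions used in the study.

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source at the
    given azimuth, using the Woodworth spherical-head approximation:
    ITD = (r / c) * (sin(theta) + theta)."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (np.sin(theta) + theta)

# ITD grows monotonically with azimuth toward the side: zero at the
# midline, roughly 0.66 ms at 90 degrees for a human-sized head.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_woodworth(az) * 1e6:6.1f} us")
```

    Fixing ITD while varying the ILD spectrum, as in the study, amounts to holding this quantity constant across virtual locations.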

  4. A lateralized auditory evoked potential elicited when auditory objects are defined by spatial motion.

    PubMed

    Butcher, Andrew; Govenlock, Stanley W; Tata, Matthew S

    2011-02-01

    Scene analysis involves the process of segmenting a field of overlapping objects from each other and from the background. It is a fundamental stage of perception in both vision and hearing. The auditory system encodes complex cues that allow listeners to find boundaries between sequential objects, even when no gap of silence exists between them. In this sense, object perception in hearing is similar to perceiving visual objects defined by isoluminant color, motion or binocular disparity. Motion is one such cue: when a moving sound abruptly disappears from one location and instantly reappears somewhere else, the listener perceives two sequential auditory objects. Smooth reversals of motion direction do not produce this segmentation. We investigated the brain electrical responses evoked by this spatial segmentation cue and compared them to the familiar auditory evoked potential elicited by sound onsets. Segmentation events evoke a pattern of negative and positive deflections that are unlike those evoked by onsets. We identified a negative component in the waveform - the Lateralized Object-Related Negativity - generated by the hemisphere contralateral to the side on which the new sound appears. The relationship between this component and similar components found in related paradigms is considered. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    PubMed Central

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2015-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067

  6. Auditory-perceptual learning improves speech motor adaptation in children.

    PubMed

    Shiller, Douglas M; Rochon, Marie-Lyne

    2014-08-01

    Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.

  7. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  8. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    PubMed

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. A Dominance Hierarchy of Auditory Spatial Cues in Barn Owls

    PubMed Central

    Witten, Ilana B.; Knudsen, Phyllis F.; Knudsen, Eric I.

    2010-01-01

    Background Barn owls integrate spatial information across frequency channels to localize sounds in space. Methodology/Principal Findings We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation. Conclusions/Significance We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans. PMID:20442852

  11. Auditory Discrimination of Normal and Learning Disabled Children: A Comparison of Their Performance on the Goldman-Fristoe-Woodcock Test of Auditory Discrimination and Wepman Auditory Discrimination Test.

    ERIC Educational Resources Information Center

    Houck, Cherry K.; And Others

    Examined was the performance of 18 normal and 20 learning disabled (LD) 8- to 9-year-old children on two competitive measures of auditory discrimination. Ss were administered the Wepman Auditory Discrimination Test (1974) and the Goldman, Fristoe, Woodcock Test of Auditory Discrimination (1970). Results suggested that little correlation exists…

  12. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. While sensory substitution has been widely used to investigate object recognition, very few studies have used it to investigate spatial representation. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution, and then a judgment of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSD). PMID:27148000
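
    Scoring a JRD trial reduces to comparing the participant's pointing direction with the true direction of the target relative to the imagined heading. A minimal sketch of that geometry (names, coordinate conventions, and the example layout are hypothetical, not the study's code):

```python
import math

def jrd_correct_angle(stand, facing, target):
    """True pointing direction for a judgment-of-relative-direction trial:
    angle of `target` relative to the imagined heading, in degrees,
    wrapped to [-180, 180); positive = counterclockwise (to the left)."""
    heading = math.atan2(facing[1] - stand[1], facing[0] - stand[0])
    to_target = math.atan2(target[1] - stand[1], target[0] - stand[0])
    diff = math.degrees(to_target - heading)
    return (diff + 180.0) % 360.0 - 180.0

def angular_error(response_deg, correct_deg):
    """Absolute angular pointing error, accounting for wrap-around."""
    return abs((response_deg - correct_deg + 180.0) % 360.0 - 180.0)

# Imagine standing at the origin facing (0, 1); an image at (1, 0)
# lies 90 deg to the right, i.e. -90 under this sign convention.
correct = jrd_correct_angle((0.0, 0.0), (0.0, 1.0), (1.0, 0.0))
```

    Averaging `angular_error` over trials grouped by imagined perspective is one simple way to compare egocentric and allocentric performance.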

  13. Detection of Auditory Signals in Quiet and Noisy Backgrounds while Performing a Visuo-spatial Task

    PubMed Central

    Rawool, Vishakha W.

    2016-01-01

    Context: The ability to detect important auditory signals while performing visual tasks may be further compounded by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions presented in a random order: (1) quiet with attention; (2) quiet with a visuo-spatial task or puzzle (distraction); (3) noise with attention and (4) noise with task. Statistical Analysis: Multivariate analyses of variance (MANOVA) with three repeated factors (quiet versus noise, visuo-spatial task versus no task, signal frequency). Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise–frequency and task–frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz) were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high frequency sounds. PMID:27991458

  14. [Auditory processing maturation in children with and without learning difficulties].

    PubMed

    Ivone, Ferreira Neves; Schochat, Eliane

    2005-01-01

    To verify whether the responses of school children aged eight to ten years, with and without learning difficulties, improve with age on auditory processing skills, and to compare the two groups. Eighty-nine children without learning complaints (Group I) and 60 children with learning difficulties (Group II) were assessed. The auditory processing tests used were: Pediatric Speech Intelligibility (PSI), Speech in Noise, Dichotic Non-Verbal (DNV) and Staggered Spondaic Word (SSW). A better performance was observed for Group I between the ages of eight and ten in all of the tests; however, the differences were statistically significant only for PSI and SSW. For Group II, performance also improved with age, with statistically significant differences for all of the tests. Comparing Groups I and II, children with no learning difficulties performed better in PSI, DNV and SSW in all three age groups. A statistically significant improvement in auditory processing responses with increasing age was thus verified between eight and ten years in children with and without learning difficulties. In the comparative study, children with learning difficulties showed lower performance in all tests across the three age groups, suggesting, for this group, a delay in the maturation of auditory processing skills.

  15. The role of auditory cortex in the spatial ventriloquism aftereffect.

    PubMed

    Zierul, Björn; Röder, Brigitte; Tempelmann, Claus; Bruns, Patrick; Noesselt, Toemme

    2017-09-06

    Cross-modal recalibration allows the brain to maintain coherent sensory representations of the world. Using functional magnetic resonance imaging (fMRI), the present study aimed at identifying the neural mechanisms underlying recalibration in an audiovisual ventriloquism aftereffect paradigm. Participants performed a unimodal sound localization task, before and after they were exposed to adaptation blocks, in which sounds were paired with spatially disparate visual stimuli offset by 14° to the right. Behavioral results showed a significant rightward shift in sound localization following adaptation, indicating a ventriloquism aftereffect. Regarding fMRI results, left and right planum temporale (lPT/rPT) were found to respond more to contralateral sounds than to central sounds at pretest. Contrasting posttest with pretest blocks revealed significantly enhanced fMRI-signals in space-sensitive lPT after adaptation, matching the behavioral rightward shift in sound localization. Moreover, a region-of-interest analysis in lPT/rPT revealed that the lPT activity correlated positively with the localization shift for right-side sounds, whereas rPT activity correlated negatively with the localization shift for left-side and central sounds. Finally, using functional connectivity analysis, we observed enhanced coupling of the lPT with left and right inferior parietal areas as well as left motor regions following adaptation and a decoupling of lPT/rPT with contralateral auditory cortex, which scaled with participants' degree of adaptation. Together, the fMRI results suggest that cross-modal spatial recalibration is accomplished by an adjustment of unisensory representations in low-level auditory cortex. Such persistent adjustments of low-level sensory representations seem to be mediated by the interplay with higher-level spatial representations in parietal cortex. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing

    PubMed Central

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R.; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training. PMID:28701989

  18. Processing of spatial sounds in human auditory cortex during visual, discrimination and 2-back tasks

    PubMed Central

    Rinne, Teemu; Ala-Salomäki, Heidi; Stecker, G. Christopher; Pätynen, Jukka; Lokki, Tapio

    2014-01-01

    Previous imaging studies on the brain mechanisms of spatial hearing have mainly focused on sounds varying in the horizontal plane. In this study, we compared activations in human auditory cortex (AC) and adjacent inferior parietal lobule (IPL) to sounds varying in horizontal location, distance, or space (i.e., different rooms). In order to investigate both stimulus-dependent and task-dependent activations, these sounds were presented during visual discrimination, auditory discrimination, and auditory 2-back memory tasks. Consistent with previous studies, activations in AC were modulated by the auditory tasks. During both auditory and visual tasks, activations in AC were stronger to sounds varying in horizontal location than along other feature dimensions. However, in IPL, this enhancement was detected only during auditory tasks. Based on these results, we argue that IPL is not primarily involved in stimulus-level spatial analysis but that it may represent such information for more general processing when relevant to an active auditory task. PMID:25120423

  19. Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.

    PubMed

    Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin

    2017-02-01

    The current study investigates cognitive processes, as reflected in late auditory-evoked potentials, as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) spaced two weeks apart, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates, were investigated across four weeks. We found that the onset of P3b-like, but not N2b-like, microstates decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that auditory-related cognitive mechanisms such as stimulus categorization, attention, and memory updating are an indispensable part of successful longitudinal auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training.
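
    Global field power, the dependent measure above, is conventionally computed at each time point as the standard deviation of the voltage across all electrodes. A minimal sketch for a single time point:

```python
def global_field_power(voltages):
    """GFP at one time point: the (population) standard deviation of the
    scalp voltage across electrodes."""
    n = len(voltages)
    mean = sum(voltages) / n
    return (sum((v - mean) ** 2 for v in voltages) / n) ** 0.5

# Four electrodes at one sample; strong spatial contrast -> high GFP.
gfp = global_field_power([1.0, -1.0, 1.0, -1.0])  # -> 1.0
```

    A flat topography (all electrodes at the same voltage) yields a GFP of zero, which is why GFP peaks are used to mark moments of strong, stable scalp fields.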

  20. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-08-02

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
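
    Equalizing perceptual magnitudes across modalities via Stevens' power law (psi = k * I^a) amounts to inverting the law for one modality so it matches the perceived magnitude of the other. A sketch under assumed constants (the exponents below are illustrative; the abstract does not give the study's values):

```python
def perceived_magnitude(intensity, exponent, k=1.0):
    """Stevens' power law: psi = k * I**a."""
    return k * intensity ** exponent

def matching_intensity(target_psi, exponent, k=1.0):
    """Invert the law: the physical intensity that yields target_psi."""
    return (target_psi / k) ** (1.0 / exponent)

# Illustrative exponents (actual values vary by modality and study):
A_VISUAL = 0.7     # e.g., perceived size of a visual circle
A_AUDITORY = 0.67  # e.g., loudness of a sound

visual_intensity = 2.0
psi = perceived_magnitude(visual_intensity, A_VISUAL)
auditory_intensity = matching_intensity(psi, A_AUDITORY)
# The two stimuli now have equal perceived magnitude under the model.
```

    Scaling the feedback signal this way ensures that any group difference reflects the sensory channel, not a difference in subjective feedback strength.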

  1. Manipulation of a central auditory representation shapes learned vocal output

    PubMed Central

    Lei, Huimeng; Mooney, Richard

    2009-01-01

    Learned vocalizations depend on the ear’s ability to monitor and ultimately instruct the voice. Where is auditory feedback processed in the brain and how does it modify motor networks for learned vocalizations? Here we addressed these questions using singing-triggered microstimulation and chronic recording methods in the singing zebra finch, a small songbird that relies on auditory feedback to learn and maintain its species-typical vocalizations. Manipulating the singing-related activity of feedback-sensitive thalamic neurons subsequently triggered vocal plasticity, constraining the central pathway and functional mechanisms through which feedback-related information shapes vocalization. PMID:20152118

  2. Influence of Syllable Structure on L2 Auditory Word Learning

    ERIC Educational Resources Information Center

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  3. The Auditory Verbal Learning Test (Rey AVLT): An Arabic Version

    ERIC Educational Resources Information Center

    Sharoni, Varda; Natur, Nazeh

    2014-01-01

    The goals of this study were to adapt the Rey Auditory Verbal Learning Test (AVLT) into Arabic, to compare recall functioning among age groups (6:0 to 17:11), and to compare gender differences on various memory dimensions (immediate and delayed recall, learning rate, recognition, proactive interferences, and retroactive interferences). This…

  5. Auditory Learning and Teaching of Hearing-Impaired Infants.

    ERIC Educational Resources Information Center

    Mischook, Muriel; Cole, Elizabeth

    1986-01-01

    The chapter examines audition and early intervention with hearing impaired infants, the normal development of audition, a model of auditory learning and teaching (involving discrimination, identification, comprehension, and detection), progression along the developmental sequence, natural interactions which aid learning, and parent role. (DB)

  6. Interdependence of spatial and temporal coding in the auditory midbrain.

    PubMed

    Koch, U; Grothe, B

    2000-04-01

    To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than only sound localization. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and more intense at the ipsilateral ear (IID = -20, -30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably comparing monaural and binaural (IID = 0) stimulation. Moreover, in approximately 50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = -20) compared with equal stimulation at both ears (IID = 0). In approximately 10% of the neurons synchronization differed when comparing different binaural cues. Blockade of the GABAergic or glycinergic inputs to the cells recorded
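
    The 50% filter cutoff used as a criterion above can be read off a rate-based modulation transfer function as the modulation frequency at which the discharge rate falls to half its maximum, interpolating between sampled points. A sketch with hypothetical low-pass MTF data (not the paper's recordings):

```python
def cutoff_50(mod_freqs, rates):
    """Highest modulation frequency at which the discharge rate crosses
    half its maximum, by linear interpolation; None if no crossing."""
    half = max(rates) / 2.0
    for i in range(len(rates) - 1, 0, -1):
        if rates[i] < half <= rates[i - 1]:
            f0, f1 = mod_freqs[i - 1], mod_freqs[i]
            r0, r1 = rates[i - 1], rates[i]
            return f0 + (half - r0) * (f1 - f0) / (r1 - r0)
    return None

# Hypothetical low-pass MTF: discharge rate (spikes/s) vs. SFM
# modulation frequency (Hz); the rate halves between 100 and 200 Hz.
freqs = [10, 20, 50, 100, 200, 400]
rates = [40.0, 42.0, 38.0, 30.0, 12.0, 3.0]
fc = cutoff_50(freqs, rates)  # -> 150.0
```

    Comparing `fc` across binaural conditions (e.g., IID = 0 vs. IID = -20) is the kind of filter-property change the abstract reports.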

  7. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared effects on sway in a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770

  8. Perceptual Learning and Auditory Training in Cochlear Implant Recipients

    PubMed Central

    Fu, Qian-Jie; Galvin, John J.

    2007-01-01

    Learning electrically stimulated speech patterns can be a new and difficult experience for cochlear implant (CI) recipients. Recent studies have shown that most implant recipients at least partially adapt to these new patterns via passive, daily-listening experiences. Gradually introducing a speech processor parameter (eg, the degree of spectral mismatch) may provide for more complete and less stressful adaptation. Although the implant device restores hearing sensation and the continued use of the implant provides some degree of adaptation, active auditory rehabilitation may be necessary to maximize the benefit of implantation for CI recipients. Currently, there are scant resources for auditory rehabilitation for adult, postlingually deafened CI recipients. We recently developed a computer-assisted speech-training program to provide the means to conduct auditory rehabilitation at home. The training software targets important acoustic contrasts among speech stimuli, provides auditory and visual feedback, and incorporates progressive training techniques, thereby maintaining recipients’ interest during the auditory training exercises. Our recent studies demonstrate the effectiveness of targeted auditory training in improving CI recipients’ speech and music perception. Provided with an inexpensive and effective auditory training program, CI recipients may find the motivation and momentum to get the most from the implant device. PMID:17709574

  9. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies.

  10. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time

    PubMed Central

    McLachlan, Neil M.; Greco, Loretta J.; Toner, Emily C.; Wilson, Sarah J.

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  11. AUDITORY DISCRIMINATION AND LEARNING--SOCIAL FACTORS.

    ERIC Educational Resources Information Center

    DEUTSCH, CYNTHIA P.

    Evidence suggests reading ability is related to other communication skills such as listening and speaking. Disruption in the process of receiving, analyzing, and utilizing auditory stimuli may have deleterious effects upon a child's development of reading skills, especially if this disruption occurs in preschool children. Those growing up in…

  12. Late Maturation of Auditory Perceptual Learning

    ERIC Educational Resources Information Center

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  14. Rapid cortical dynamics associated with auditory spatial attention gradients

    PubMed Central

    Mock, Jeffrey R.; Seay, Michael J.; Charney, Danielle R.; Holmes, John L.; Golob, Edward J.

    2015-01-01

    Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120–200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models. PMID:26082679
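
    The gradient quantification described above reduces to a least-squares slope of ERP amplitude against distractor distance from the attended location. A minimal sketch with hypothetical amplitudes (not the study's data):

```python
def gradient_slope(distances_deg, amplitudes):
    """Ordinary least-squares slope of ERP amplitude (uV) regressed on
    distractor distance from the target location (deg)."""
    n = len(distances_deg)
    mx = sum(distances_deg) / n
    my = sum(amplitudes) / n
    num = sum((x - mx) * (y - my) for x, y in zip(distances_deg, amplitudes))
    den = sum((x - mx) ** 2 for x in distances_deg)
    return num / den

# Hypothetical amplitudes at the four non-target locations (45-180 deg
# from the target); amplitude falls off with distance -> negative slope.
slope = gradient_slope([45.0, 90.0, 135.0, 180.0], [2.0, 1.4, 0.9, 0.2])
```

    Computing this slope in successive time windows is what produces the tri-phasic sequence of gradient-slope peaks the study reports.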

  15. Rapid cortical dynamics associated with auditory spatial attention gradients.

    PubMed

    Mock, Jeffrey R; Seay, Michael J; Charney, Danielle R; Holmes, John L; Golob, Edward J

    2015-01-01

    Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120-200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models.
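
    The gradient measure used in this study (regressing distractor-evoked ERP amplitude against distance from the attended location) reduces to a degree-1 least-squares fit. A minimal sketch in Python, using hypothetical amplitude values rather than data from the study:

```python
import numpy as np

def gradient_slope(offsets_deg, amplitudes_uv):
    """Least-squares slope of ERP amplitude vs. distractor distance
    from the attended location (uV per degree)."""
    slope, intercept = np.polyfit(offsets_deg, amplitudes_uv, 1)
    return slope, intercept

# Hypothetical distractor offsets (deg from target) and mean ERP amplitudes (uV)
offsets = np.array([45.0, 90.0, 135.0, 180.0])
amps = np.array([2.0, 1.5, 1.1, 0.6])

slope, intercept = gradient_slope(offsets, amps)
print(round(slope, 4))  # → -0.0102 (amplitude falls off away from the target)
```

    A positive slope would correspond to the frontal gradients reported at ~300 ms, a negative slope to the posterior gradient at ~430 ms.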

  16. Lack of generalization of auditory learning in typically developing children.

    PubMed

    Halliday, Lorna F; Taylor, Jenny L; Millward, Kerri E; Moore, David R

    2012-02-01

    To understand the components of auditory learning in typically developing children by assessing generalization across stimuli, across modalities (i.e., hearing, vision), and to higher level language tasks. Eighty-six 8- to 10-year-old typically developing children were quasi-randomly assigned to 4 groups. Three of the groups received twelve 30-min training sessions on multiple standards using either an auditory frequency discrimination task (AFD group), auditory phonetic discrimination task (PD group), or visual frequency discrimination task (VFD group) over 4 weeks. The 4th group, which was the no-intervention control (NI) group, did not receive any training. Thresholds on all tasks (AFD, PD, and VFD) were assessed immediately before and after training, along with performance on a battery of language assessments. Relative to the other groups, both the AFD group and the PD group, but not the VFD group, showed significant learning on the stimuli upon which they were trained. However, in those instances where learning was observed, it did not generalize to the nontrained stimuli or to the language assessments. Nonspeech (AFD) or speech (PD) discrimination training can lead to auditory learning in typically developing children of this age range. However, this learning does not always generalize across stimuli or tasks, across modalities, or to higher level measures of language ability.

  17. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  19. Lack of Generalization of Auditory Learning in Typically Developing Children

    ERIC Educational Resources Information Center

    Halliday, Lorna F.; Taylor, Jenny L.; Millward, Kerri E.; Moore, David R.

    2012-01-01

    Purpose: To understand the components of auditory learning in typically developing children by assessing generalization across stimuli, across modalities (i.e., hearing, vision), and to higher level language tasks. Method: Eighty-six 8- to 10-year-old typically developing children were quasi-randomly assigned to 4 groups. Three of the groups…

  20. Auditory Feedback and Writing: Learning Disabled and Nondisabled Students.

    ERIC Educational Resources Information Center

    Espin, Christine A.; Sindelar, Paul T.

    1988-01-01

    Ninety students in grades six to eight listened to or read written passages and then identified and corrected grammar and syntax errors. Students listening and receiving auditory feedback located more errors than those reading. Learning-disabled students and students matched on reading level identified fewer errors than did students matched on…

  1. Task-dependent activations of human auditory cortex during spatial discrimination and spatial memory tasks.

    PubMed

    Rinne, Teemu; Koistinen, Sonja; Talja, Suvi; Wikman, Patrik; Salonen, Oili

    2012-02-15

    In the present study, we applied high-resolution functional magnetic resonance imaging (fMRI) of the human auditory cortex (AC) and adjacent areas to compare activations during spatial discrimination and spatial n-back memory tasks that were varied parametrically in difficulty. We found that activations in the anterior superior temporal gyrus (STG) were stronger during spatial discrimination than during spatial memory, while spatial memory was associated with stronger activations in the inferior parietal lobule (IPL). We also found that wide AC areas were strongly deactivated during the spatial memory tasks. The present AC activation patterns associated with spatial discrimination and spatial memory tasks were highly similar to those obtained in our previous study comparing AC activations during pitch discrimination and pitch memory (Rinne et al., 2009). Together our previous and present results indicate that discrimination and memory tasks activate anterior and posterior AC areas differently and that this anterior-posterior division is present both when these tasks are performed on spatially invariant (pitch discrimination vs. memory) or spatially varying (spatial discrimination vs. memory) sounds. These results also further strengthen the view that activations of human AC cannot be explained only by stimulus-level parameters (e.g., spatial vs. nonspatial stimuli) but that the activations observed with fMRI are strongly dependent on the characteristics of the behavioral task. Thus, our results suggest that in order to understand the functional structure of AC a more systematic investigation of task-related factors affecting AC activations is needed. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Influence of syllable structure on L2 auditory word learning.

    PubMed

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  3. The Time Course of Neural Changes Underlying Auditory Perceptual Learning

    PubMed Central

    Atienza, Mercedes; Cantero, Jose L.; Dominguez-Marin, Elena

    2002-01-01

    Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)—which indexes automatic detection of a change in a homogeneous auditory sequence—just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception. PMID:12075002

  4. Effect of task-related continuous auditory feedback during learning of tracking motion exercises.

    PubMed

    Rosati, Giulio; Oscari, Fabio; Spagnol, Simone; Avanzini, Federico; Masiero, Stefano

    2012-10-10

    This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment, we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects to those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel visuomotor perturbation, whereas controller-task-related sound
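
    Task-related sonification of the kind described above can be as simple as mapping a kinematic variable onto pitch. The mapping below is purely illustrative (the study's actual audio mappings and parameter ranges are not specified here); it maps the target's normalized screen position to a tone frequency:

```python
def position_to_pitch(y_norm, f_min=220.0, f_max=880.0):
    """Task-related sonification sketch: map the target's normalized
    vertical position (0 = bottom, 1 = top) to a tone frequency in Hz."""
    y = min(max(y_norm, 0.0), 1.0)  # clamp to the screen
    return f_min + y * (f_max - f_min)

print(position_to_pitch(0.0))  # → 220.0
print(position_to_pitch(0.5))  # → 550.0
print(position_to_pitch(1.0))  # → 880.0
```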

  5. Efficacy of the LiSN & Learn auditory training software: randomized blinded controlled study

    PubMed Central

    Cameron, Sharon; Glyde, Helen; Dillon, Harvey

    2012-01-01

    Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 min per day for twelve weeks. There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (P=0.03 to 0.0008, η²=0.75 to 0.95, n=5), but not for the Earobics group (P=0.5 to 0.7, η²=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed that improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation. PMID:26557330

  6. Reduction of internal noise in auditory perceptual learning.

    PubMed

    Jones, Pete R; Moore, David R; Amitay, Sygal; Shub, Daniel E

    2013-02-01

    This paper examines what mechanisms underlie auditory perceptual learning. Fifteen normal-hearing adults performed two-alternative forced-choice pure-tone frequency discrimination for four sessions. External variability was introduced by adding a zero-mean Gaussian random variable to the frequency of each tone. Measures of internal noise, encoding efficiency, bias, and inattentiveness were derived using four methods (model fit, classification boundary, psychometric function, and double-pass consistency). The four methods gave convergent estimates of internal noise, which was found to decrease from 4.52 to 2.93 Hz with practice. No group-mean changes in encoding efficiency, bias, or inattentiveness were observed. It is concluded that learned improvements in frequency discrimination primarily reflect a reduction in internal noise. Data from highly experienced listeners and neural networks performing the same task are also reported. These results also indicated that auditory learning represents internal noise reduction, potentially through the re-weighting of frequency-specific channels.
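
    Of the four estimation methods listed, double-pass consistency is the most direct: the same externally jittered trials are presented twice, and the proportion of identical responses falls as internal noise grows. A toy signal-detection simulation (the observer model and all parameters are deliberately minimal illustrations, not the study's procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def double_pass_agreement(external_sd, internal_sd, n_trials=10000):
    """Simulate a yes/no observer twice on the *same* externally
    jittered trials; more internal noise -> lower response agreement."""
    signal = 5.0  # fixed frequency difference (Hz) on every trial
    external = rng.normal(0.0, external_sd, n_trials)  # frozen across passes
    pass1 = (signal + external + rng.normal(0.0, internal_sd, n_trials)) > 0
    pass2 = (signal + external + rng.normal(0.0, internal_sd, n_trials)) > 0
    return np.mean(pass1 == pass2)

# Internal-noise SDs borrowed from the values reported above (post- vs. pre-training)
low = double_pass_agreement(external_sd=4.0, internal_sd=2.93)
high = double_pass_agreement(external_sd=4.0, internal_sd=4.52)
print(low > high)  # the lower-noise (trained) observer answers more consistently
```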

  7. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    PubMed

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. NFB using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.
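
    The reinforcement criterion in this protocol is a drop in the theta/alpha band-power ratio. A sketch of computing that ratio for one EEG channel with a plain FFT (the band edges and synthetic test signal are illustrative, not the study's exact analysis pipeline):

```python
import numpy as np

def theta_alpha_ratio(eeg, fs, theta=(4.0, 8.0), alpha=(8.0, 12.0)):
    """Ratio of theta to alpha band power; the NFB procedure would
    reinforce the child whenever this value falls below a criterion."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    band = lambda lo, hi: power[(freqs >= lo) & (freqs < hi)].sum()
    return band(*theta) / band(*alpha)

# Synthetic 10 s channel: strong 10 Hz alpha plus weaker 6 Hz theta
fs = 250
t = np.arange(0, 10, 1.0 / fs)
eeg = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 6 * t)

print(round(theta_alpha_ratio(eeg, fs), 2))  # → 0.09 (alpha-dominant EEG)
```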

  8. Spatial versus object feature processing in human auditory cortex: a magnetoencephalographic study.

    PubMed

    Herrmann, Christoph S; Senkowski, Daniel; Maess, Burkhard; Friederici, Angela D

    2002-12-06

    The human visual system is divided into two pathways specialized for the processing of either objects or spatial locations. Neuroanatomical studies in monkeys have suggested that a similar specialization may also divide auditory cortex into two such pathways. We used the identical stimulus material in two experimental sessions in which subjects had to either identify auditory objects or their location. Magnetoencephalograms were recorded and M100 dipoles were fitted into individual brain models. In the right hemisphere, the processing of auditory spatial information led to more lateral activations within the temporal plane, while object identification led to more medial activations. These findings suggest that the human auditory system processes object features and spatial features in distinct areas.

  9. Spatial deployment of attention within and across hemifields in an auditory task.

    PubMed

    Rorden, C; Driver, J

    2001-04-01

    Research on visual attention has demonstrated that covert attention can be focused on particular locations within one hemifield, but that a specific "meridian" cost may also be found for shifting attention between hemifields. These issues have received less consideration for audition, even though reliable behavioral measures for the effects of spatial attention on hearing are now available. We examined the spatial distribution of covert attention in an auditory task following spatially non-predictive peripheral auditory cues (which should induce exogenous attention shifts), or following symbolic central cues that predicted the likely location for the auditory target (to induce endogenous attention shifts). In both cases, we found that attention can be focused not only on one hemifield versus another, but also within one hemifield in an auditory task. However, there was no unequivocal evidence for a meridian effect in audition.

  10. Spatial Language Learning

    ERIC Educational Resources Information Center

    Fu, Zhengling

    2016-01-01

    Spatial language constitutes part of the basic fabric of language. Although languages may have the same number of terms to cover a set of spatial relations, they do not always do so in the same way. Spatial languages differ across languages quite radically, thus providing a real semantic challenge for second language learners. The essay first…

  11. Generalization lags behind learning on an auditory perceptual task.

    PubMed

    Wright, Beverly A; Wilson, Roselyn M; Sabin, Andrew T

    2010-09-01

    The generalization of learning from trained to untrained conditions is of great potential value because it markedly increases the efficacy of practice. In principle, generalization and the learning itself could arise from either the same or distinct neural changes. Here, we assessed these two possibilities in the realm of human perceptual learning by comparing the time course of improvement on a trained condition (learning) to that on an untrained condition (generalization) for an auditory temporal-interval discrimination task. While significant improvement on the trained condition occurred within 2 d, generalization to the untrained condition lagged behind, only emerging after 4 d. The different time courses for learning and generalization suggest that these two types of perceptual improvement can arise from at least partially distinct neural changes. The notably longer time course for generalization than learning demonstrates that increasing the duration of training can be an effective means to increase the number of conditions to which learning generalizes on perceptual tasks.

  12. Selective Increase of Auditory Cortico-Striatal Coherence during Auditory-Cued Go/NoGo Discrimination Learning

    PubMed Central

    Schulz, Andreas L.; Woldeit, Marie L.; Gonçalves, Ana I.; Saldeitis, Katja; Ohl, Frank W.

    2016-01-01

    Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still weakly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed. PMID:26793085
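
    Field-field coherence of the kind measured here quantifies frequency-specific coupling between two recording sites. A self-contained sketch of trial-averaged magnitude-squared coherence on simulated data (the 8 Hz shared drive, trial count, and noise levels are invented for illustration and are not the study's parameters):

```python
import numpy as np

def msc(x_trials, y_trials, fs):
    """Magnitude-squared coherence across trials, per frequency bin:
    |<Sxy>|^2 / (<Sxx> * <Syy>)."""
    X = np.fft.rfft(x_trials, axis=1)
    Y = np.fft.rfft(y_trials, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(x_trials.shape[1], 1.0 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 500, 200, 500
t = np.arange(n_samp) / fs
shared = np.sin(2 * np.pi * 8 * t)  # common phase-locked 8 Hz drive
x = shared + rng.normal(0.0, 1.0, (n_trials, n_samp))  # site 1, e.g. auditory cortex
y = shared + rng.normal(0.0, 1.0, (n_trials, n_samp))  # site 2, e.g. ventral striatum

freqs, coh = msc(x, y, fs)
print(coh[freqs == 8.0][0] > coh[freqs == 40.0][0])  # coupling peaks at the shared drive
```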

  13. Feedback delays eliminate auditory-motor learning in speech production.

    PubMed

    Max, Ludo; Maffett, Derek G

    2015-03-30

    Neurologically healthy individuals use sensory feedback to alter future movements by updating internal models of the effector system and environment. For example, when visual feedback about limb movements or auditory feedback about speech movements is experimentally perturbed, the planning of subsequent movements is adjusted - i.e., sensorimotor adaptation occurs. A separate line of studies has demonstrated that experimentally delaying the sensory consequences of limb movements causes the sensory input to be attributed to external sources rather than to one's own actions. Yet similar feedback delays have remarkably little effect on visuo-motor adaptation (although the rate of learning varies, the amount of adaptation is only moderately affected with delays of 100-200ms, and adaptation still occurs even with a delay as long as 5000ms). Thus, limb motor learning remains largely intact even in conditions where error assignment favors external factors. Here, we show a fundamentally different result for sensorimotor control of speech articulation: auditory-motor adaptation to formant-shifted feedback is completely eliminated with delays of 100ms or more. Thus, for speech motor learning, real-time auditory feedback is critical. This novel finding informs theoretical models of human motor control in general and speech motor control in particular, and it has direct implications for the application of motor learning principles in the habilitation and rehabilitation of individuals with various sensorimotor speech disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Rapid auditory learning of temporal gap detection.

    PubMed

    Mishra, Srikanta K; Panda, Manasa R

    2016-07-01

    The rapid initial phase of training-induced improvement has been shown to reflect a genuine sensory change in perception. Several features of early and rapid learning, such as generalization and stability, remain to be characterized. The present study demonstrated that learning effects from brief training on a temporal gap detection task, using spectrally similar narrowband noise markers to define the gap (within-channel task), transfer across ears but not across spectrally dissimilar markers (between-channel task). The learning effects associated with brief training on a gap detection task were found to be stable for at least a day. These initial findings have significant implications for characterizing early and rapid learning effects.

  15. Analysis of Mean Learning of Normal Participants on the Rey Auditory-Verbal Learning Test

    ERIC Educational Resources Information Center

    Poreh, Amir

    2005-01-01

    Analysis of the mean performance of 58 groups of normal adults and children on the free-recall trials of the Rey Auditory-Verbal Learning Test shows that the mean auditory-verbal learning of each group is described by the function R1 + S·ln(t), where R1 is a measure of the mean immediate memory span, S is the slope of the mean logarithmic learning…
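
    Because ln(1) = 0, R1 in the function above is simply the fitted recall on trial 1, and the model is linear in ln(t), so it can be fitted by ordinary least squares. A sketch with hypothetical trial means (not values from the study):

```python
import numpy as np

def fit_ravlt_curve(trials, recalled):
    """Fit mean recall R(t) = R1 + S*ln(t) by linear regression on ln(t);
    R1 estimates immediate memory span, S the logarithmic learning slope."""
    S, R1 = np.polyfit(np.log(trials), recalled, 1)
    return R1, S

# Hypothetical mean words recalled on free-recall trials 1-5
trials = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
recalled = np.array([6.0, 8.1, 9.3, 10.2, 10.9])

R1, S = fit_ravlt_curve(trials, recalled)
print(round(R1, 1), round(S, 1))  # → 6.0 3.0
```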

  16. Motivation and Intelligence Drive Auditory Perceptual Learning

    PubMed Central

    Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R.

    2010-01-01

    Background Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones on learning frequency discrimination. Methodology/Principal Findings Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no feedback condition performed more poorly after training. Conclusions/Significance This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that ‘perceptual’ learning is strongly influenced by top-down processes of motivation and intelligence. PMID:20352121

  17. Motivation and intelligence drive auditory perceptual learning.

    PubMed

    Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R

    2010-03-23

    Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones on learning frequency discrimination. Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.

  18. Motor-related signals in the auditory system for listening and learning.

    PubMed

    Schneider, David M; Mooney, Richard

    2015-08-01

    In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning.

  19. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
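
    The compressive fits reported here take the form judged = k · actual^a with exponent a < 1, which is linear in log-log coordinates. A sketch with hypothetical distance judgments (not the study's data):

```python
import numpy as np

def fit_power_function(actual_m, judged_m):
    """Fit judged = k * actual**a by linear regression in log-log space.
    a < 1 means compression: near sources overestimated, far ones underestimated."""
    a, log_k = np.polyfit(np.log(actual_m), np.log(judged_m), 1)
    return np.exp(log_k), a

# Hypothetical virtual-source distances (m) and mean judged distances (m)
actual = np.array([1.0, 2.0, 4.0, 8.0])
judged = np.array([1.3, 2.0, 3.1, 4.8])

k, a = fit_power_function(actual, judged)
print(a < 1.0)  # compressive exponent, as described for the blind listeners above
```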

  20. Auditory middle latency response in children with learning difficulties

    PubMed Central

    Frizzo, Ana Claudia Figueiredo; Issac, Myriam Lima; Pontes-Fernandes, Angela Cristina; Menezes, Pedro de Lemos; Funayama, Carolina Araújo Rodrigues

    2012-01-01

    Introduction: This is an objective laboratory assessment of the central auditory systems of children with learning disabilities. Aim: To examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. Methods: This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8–13 years of both genders with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. Results and Conclusions: The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8–32.3 ms, Pa = 19.0–51.4 ms, Nb = 30.0–64.3 ms (learning disorders group) and Na = 13.2–29.6 ms, Pa = 21.8–42.8 ms, Nb = 28.4–65.8 ms (healthy group). The values of the Na-Pa amplitude ranged from 0.3 to 6.8 μV (learning disorders group) and from 0.2 to 3.6 μV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study has provided information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders. PMID:25991954

  1. Auditory working memory predicts individual differences in absolute pitch learning.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance, the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical-period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization: to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical-period adults. Implications for understanding the mechanisms that underlie the

  2. Auditory middle latency response in children with learning difficulties.

    PubMed

    Frizzo, Ana Claudia Figueiredo; Issac, Myriam Lima; Pontes-Fernandes, Angela Cristina; Menezes, Pedro de Lemos; Funayama, Carolina Araújo Rodrigues

    2012-07-01

    Introduction: This is an objective laboratory assessment of the central auditory systems of children with learning disabilities. Aim: To examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. Methods: This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8-13 years of both genders with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. Results and Conclusions: The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8-32.3 ms, Pa = 19.0-51.4 ms, Nb = 30.0-64.3 ms (learning disorders group) and Na = 13.2-29.6 ms, Pa = 21.8-42.8 ms, Nb = 28.4-65.8 ms (healthy group). The Na-Pa amplitude ranged from 0.3 to 6.8 µV (learning disorders group) and from 0.2 to 3.6 µV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study has provided information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders.

  3. Learning Anatomy Enhances Spatial Ability

    ERIC Educational Resources Information Center

    Vorstenbosch, Marc A. T. M.; Klaassen, Tim P. F. M.; Donders, A. R. T.; Kooloos, Jan G. M.; Bolhuis, Sanneke M.; Laan, Roland F. J. M.

    2013-01-01

    Spatial ability is an important factor in learning anatomy. Students with high scores on a mental rotation test (MRT) systematically score higher on anatomy examinations. This study aims to investigate whether, conversely, learning anatomy also improves the MRT score. Five hundred first-year students of medicine ("n" = 242, intervention) and…

  5. Spatial representations of temporal and spectral sound cues in human auditory cortex.

    PubMed

    Herdener, Marcus; Esposito, Fabrizio; Scheffler, Klaus; Schneider, Peter; Logothetis, Nikos K; Uludag, Kamil; Kayser, Christoph

    2013-01-01

    Natural and behaviorally relevant sounds are characterized by temporal modulations of their waveforms, which carry important cues for sound segmentation and communication. Still, there is little consensus as to how this temporal information is represented in auditory cortex. Here, by using functional magnetic resonance imaging (fMRI) optimized for studying the auditory system, we report the existence of a topographically ordered spatial representation of temporal sound modulation rates in human auditory cortex. We found a topographically organized sensitivity within auditory cortex to sounds with varying modulation rates, with enhanced responses to lower modulation rates (2 and 4 Hz) on lateral parts of Heschl's gyrus (HG) and faster modulation rates (16 and 32 Hz) on medial HG. The representation of temporal modulation rates was distinct from the representation of sound frequencies (tonotopy), which was oriented roughly orthogonally to it. Moreover, the combination of probabilistic anatomical maps with a previously proposed functional delineation of auditory fields revealed that the distinct maps of temporal and spectral sound features both prevail within two presumed primary auditory fields hA1 and hR. Our results reveal a topographically ordered representation of temporal sound cues in human primary auditory cortex that is complementary to maps of spectral cues. They thereby enhance our understanding of the functional parcellation and organization of auditory cortical processing.

  6. Emergence of learned categorical representations within an auditory forebrain circuit

    PubMed Central

    Jeanne, James M.; Thompson, Jason V.; Sharpee, Tatyana O.; Gentner, Timothy Q.

    2011-01-01

    Many learned behaviors are thought to require the activity of high-level neurons that represent categories of complex signals, such as familiar faces or native speech sounds. How these complex, experience-dependent neural responses emerge within the brain’s circuitry is not well understood. The caudomedial mesopallium (CMM), a secondary auditory region in the songbird brain, contains neurons that respond to specific combinations of song components and respond preferentially to the songs that birds have learned to recognize. Here, we examine the transformation of these learned responses across a broader forebrain circuit that includes the caudolateral mesopallium (CLM), an auditory region that provides input to CMM. We recorded extracellular single-unit activity in CLM and CMM in European starlings trained to recognize sets of conspecific songs and compared multiple encoding properties of neurons between these regions. We find that the responses of CMM neurons are more selective between song components, convey more information about song components, and are more variable over repeated components than the responses of CLM neurons. While learning enhances neural encoding of song components in both regions, CMM neurons encode more information about the learned categories associated with songs than CLM neurons. Collectively, these data suggest that CLM and CMM are part of a functional sensory hierarchy that is modified by learning to yield representations of natural vocal signals that are increasingly informative with respect to behavior. PMID:21325527

  7. Interactive coding of visual spatial frequency and auditory amplitude-modulation rate.

    PubMed

    Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Mossbridge, Julia; Suzuki, Satoru

    2012-03-06

    Spatial frequency is a fundamental visual feature coded in primary visual cortex, relevant for perceiving textures, objects, hierarchical structures, and scenes, as well as for directing attention and eye movements. Temporal amplitude-modulation (AM) rate is a fundamental auditory feature coded in primary auditory cortex, relevant for perceiving auditory objects, scenes, and speech. Spatial frequency and temporal AM rate are thus fundamental building blocks of visual and auditory perception. Recent results suggest that crossmodal interactions are commonplace across the primary sensory cortices and that some of the underlying neural associations develop through consistent multisensory experience such as audio-visually perceiving speech, gender, and objects. We demonstrate that people consistently and absolutely (rather than relatively) match specific auditory AM rates to specific visual spatial frequencies. We further demonstrate that this crossmodal mapping allows amplitude-modulated sounds to guide attention to and modulate awareness of specific visual spatial frequencies. Additional results show that the crossmodal association is approximately linear, based on physical spatial frequency, and generalizes to tactile pulses, suggesting that the association develops through multisensory experience during manual exploration of surfaces. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    ERIC Educational Resources Information Center

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  9. Spatial Attention Evokes Similar Activation Patterns for Visual and Auditory Stimuli

    PubMed Central

    Smith, David V.; Davis, Ben; Niu, Kathy; Healy, Eric W.; Bonilha, Leonardo; Fridriksson, Julius; Morgan, Paul S.; Rorden, Chris

    2010-01-01

    Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities. PMID:19400684

  10. Integration of Auditory and Visual Spatial Information During Early Infancy.

    ERIC Educational Resources Information Center

    Lyons-Ruth, Karlen

    An experiment was performed to show that infants perceive auditory and visual stimuli within a common space and that they perceive the sound as an attribute of the visual object. Subjects were 22 infants aged 3 to 5 months. Each infant was presented with a toy that moved in a small arc from side to side of a small window at the rate of one arc per…

  11. Assessment of auditory spatial awareness in complex listening environments.

    PubMed

    Brungart, Douglas S; Cohen, Julie; Cord, Mary; Zion, Danielle; Kalluri, Sridhar

    2014-10-01

    In the real world, listeners often need to track multiple simultaneous sources in order to maintain awareness of the relevant sounds in their environments. Thus, there is reason to believe that simple single-source sound localization tasks may not accurately capture the impact that a listening device such as a hearing aid might have on a listener's level of auditory awareness. In this experiment, 10 normal-hearing listeners and 20 hearing-impaired listeners were tested in a task that required them to identify and localize sound sources in three different listening tasks of increasing complexity: a single-source localization task, where listeners identified and localized a single sound source presented in isolation; an added source task, where listeners identified and localized a source that was added to an existing auditory scene; and a remove source task, where listeners identified and localized a source that was removed from an existing auditory scene. Hearing-impaired listeners completed these tasks with and without the use of their previously fit hearing aids. As expected, the results show that performance decreased both with increasing task complexity and with the number of competing sound sources in the acoustic scene. The results also show that the added source task was as sensitive to differences in performance across listening conditions as the standard localization task, but that it correlated with a different pattern of subjective and objective performance measures across listeners. This result suggests that a measure of complex auditory situation awareness such as the one tested here may be a useful tool for evaluating differences in performance across different types of listening devices, such as hearing aids or hearing protection devices.
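
Scoring identification-and-localization responses of this kind typically uses the smallest angular difference between response and target azimuth, since azimuth wraps around the listener. A minimal sketch; the function name and degree convention are illustrative assumptions, not taken from the study:

```python
def angular_error(response_deg, target_deg):
    """Smallest absolute azimuth difference in degrees, in [0, 180]."""
    diff = abs(response_deg - target_deg) % 360
    return min(diff, 360 - diff)

# A response at 350 deg to a target at 10 deg is only 20 deg off,
# not 340 deg, because azimuth wraps around the listener.
errors = [angular_error(r, t) for r, t in [(350, 10), (45, 40), (90, 270)]]
```

Averaging such wrapped errors across trials gives a localization-accuracy score that behaves sensibly near the front/back midline.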

  12. Foxp2 mutations impair auditory-motor association learning.

    PubMed

    Kurt, Simone; Fisher, Simon E; Ehret, Günter

    2012-01-01

    Heterozygous mutations of the human FOXP2 transcription factor gene cause the best-described examples of monogenic speech and language disorders. Acquisition of proficient spoken language involves auditory-guided vocal learning, a specialized form of sensory-motor association learning. The impact of etiological Foxp2 mutations on learning of auditory-motor associations in mammals has not been determined yet. Here, we directly assess this type of learning using a newly developed conditioned avoidance paradigm in a shuttle-box for mice. We show striking deficits in mice heterozygous for either of two different Foxp2 mutations previously implicated in human speech disorders. Both mutations cause delays in acquiring new motor skills. The magnitude of impairments in association learning, however, depends on the nature of the mutation. Mice with a missense mutation in the DNA-binding domain are able to learn, but at a much slower rate than wild type animals, while mice carrying an early nonsense mutation learn very little. These results are consistent with expression of Foxp2 in distributed circuits of the cortex, striatum and cerebellum that are known to play key roles in acquisition of motor skills and sensory-motor association learning, and suggest differing in vivo effects for distinct variants of the Foxp2 protein. Given the importance of such networks for the acquisition of human spoken language, and the fact that similar mutations in human FOXP2 cause problems with speech development, this work opens up a new perspective on the use of mouse models for understanding pathways underlying speech and language disorders.

  13. Different Verbal Learning Strategies in Autism Spectrum Disorder: Evidence from the Rey Auditory Verbal Learning Test

    ERIC Educational Resources Information Center

    Bowler, Dermot M.; Limoges, Elyse; Mottron, Laurent

    2009-01-01

    The Rey Auditory Verbal Learning Test, which requires the free recall of the same list of 15 unrelated words over 5 trials, was administered to 21 high-functioning adolescents and adults with autism spectrum disorder (ASD) and 21 matched typical individuals. The groups showed similar overall levels of free recall, rates of learning over trials and…

  14. Visual and Auditory Learning Processes in Normal Children and Children with Specific Learning Disabilities. Final Report.

    ERIC Educational Resources Information Center

    McGrady, Harold J.; Olson, Don A.

    To describe and compare the psychosensory functioning of normal children and children with specific learning disabilities, 62 learning disabled and 68 normal children were studied. Each child was given a battery of thirteen subtests on an automated psychosensory system representing various combinations of auditory and visual intra- and…

  15. [Development of auditory-visual spatial integration using saccadic response time as the index].

    PubMed

    Kato, Masaharu; Konishi, Kaoru; Kurosawa, Makiko; Konishi, Yukuo

    2006-05-01

    We measured saccadic response time (SRT) to investigate developmental changes related to spatially aligned or misaligned auditory and visual stimuli responses. We exposed 4-, 5-, and 11-month-old infants to ipsilateral or contralateral auditory-visual stimuli and monitored their eye movements using an electro-oculographic (EOG) system. The SRT analyses revealed four main results. First, saccades were triggered by visual stimuli but not always triggered by auditory stimuli. Second, SRTs became shorter as the children grew older. Third, SRTs for the ipsilateral and visual-only conditions were the same in all infants. Fourth, SRTs for the contralateral condition were longer than for the ipsilateral and visual-only conditions in 11-month-old infants but were the same for all three conditions in 4- and 5-month-old infants. These findings suggest that infants acquire the function of auditory-visual spatial integration underlying saccadic eye movement between the ages of 5 and 11 months. The dependency of SRTs on the spatial configuration of auditory and visual stimuli can be explained by cortical control of the superior colliculus. Our finding of no differences in SRTs between the ipsilateral and visual-only conditions suggests that there are multiple pathways for controlling the superior colliculus and that these pathways have different developmental time courses.

  16. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Until recently, however, the concept of virtually walking through an auditory environment had not been explored. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, several concerns must be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources.

  17. New perspectives on the auditory cortex: learning and memory.

    PubMed

    Weinberger, Norman M

    2015-01-01

    Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.

  18. Auditory Processing, Plasticity, and Learning in the Barn Owl

    PubMed Central

    Peña, José L.; DeBello, William M.

    2011-01-01

    The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding of how neuronal networks give rise to behavior. PMID:21131711

  19. Auditory Discrimination as a Condition for E-Learning Based Speech Therapy: A Proposal for an Auditory Discrimination Test (ADT) for Adult Dysarthric Speakers

    ERIC Educational Resources Information Center

    Beijer, L. J.; Rietveld, A. C. M.; van Stiphout, A. J. L.

    2011-01-01

    Background: Web based speech training for dysarthric speakers, such as E-learning based Speech Therapy (EST), puts considerable demands on auditory discrimination abilities. Aims: To discuss the development and the evaluation of an auditory discrimination test (ADT) for the assessment of auditory speech discrimination skills in Dutch adult…

  20. Relative Contributions of Visual and Auditory Spatial Representations to Tactile Localization

    PubMed Central

    Noel, Jean-Paul; Wallace, Mark

    2016-01-01

    Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed and crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities, with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch. PMID:26768124

  1. The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Sodnik, Jaka; Jakus, Grega; Tomazic, Saso

    2012-01-01

    Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…

  3. Comparison of Auditory Language Comprehension Skills in Learning-Disabled and Academically Achieving Adolescents.

    ERIC Educational Resources Information Center

    Riedlinger-Ryan, Kathryn J.; Shewan, Cynthia M.

    1984-01-01

    Thirty academically achieving and 30 learning-disabled adolescents were examined on a battery of auditory language comprehension tests. Results indicated that 73 percent of the learning-disabled group scored lower than all of the control Ss on one or more of these tests. The importance of identifying auditory comprehension defects in the…

  4. Longitudinal Changes in Auditory Discrimination in Normal Children and Children with Language-Learning Problems.

    ERIC Educational Resources Information Center

    Elliott, Lois L.; Hammer, Michael A.

    1988-01-01

    Using a set of fine-grained auditory discrimination tasks, 21 children with language-learning problems were compared with 21 normal children, aged six-nine. Across three years, children with language-learning problems showed poorer auditory discrimination for temporally based acoustic differences, poorer receptive vocabulary and language…

  5. Binding of Verbal and Spatial Features in Auditory Working Memory

    ERIC Educational Resources Information Center

    Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.

    2009-01-01

    The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g., V1S1, V2S2, V3S3, V4S4) were…

  6. Changes in Auditory Frequency Guide Visual-Spatial Attention

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2011-01-01

    How do the characteristics of sounds influence the allocation of visual-spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual-spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be…

  7. Spatial and Temporal Relationships of Electrocorticographic Alpha and Gamma Activity During Auditory Processing

    PubMed Central

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T.; Schalk, Gerwin

    2014-01-01

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., a continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these
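
The Granger-causality logic invoked in this abstract asks whether the past of one signal improves prediction of another beyond that signal's own past. Below is a lag-1, pure-Python sketch on synthetic band-power series; all variable names are hypothetical and this is not the authors' pipeline, which used proper multi-lag statistical tests.

```python
import random

def ols_r2(y, X):
    """R-squared of an ordinary least-squares fit via normal equations."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    for i in range(p):                      # forward elimination
        for j in range(i + 1, p):
            f = xtx[j][i] / xtx[i][i]
            for c in range(i, p):
                xtx[j][c] -= f * xtx[i][c]
            xty[j] -= f * xty[i]
    beta = [0.0] * p
    for i in reversed(range(p)):            # back substitution
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, p))) / xtx[i][i]
    fitted = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def granger_gain(target, driver):
    """Extra R-squared from adding the driver's lag-1 history to the
    target's own lag-1 history (nested in-sample fits, so gain >= 0)."""
    y = target[1:]
    restricted = [[1.0, target[t]] for t in range(len(target) - 1)]
    full = [[1.0, target[t], driver[t]] for t in range(len(target) - 1)]
    return ols_r2(y, full) - ols_r2(y, restricted)

# Synthetic example: "gamma" drives "alpha" one step later, not vice versa.
random.seed(0)
gamma = [random.gauss(0, 1) for _ in range(500)]
alpha = [0.0]
for t in range(1, 500):
    alpha.append(0.8 * gamma[t - 1] + 0.1 * random.gauss(0, 1))

forward = granger_gain(alpha, gamma)   # large: gamma's past predicts alpha
backward = granger_gain(gamma, alpha)  # near zero: alpha's past does not
```

A real Granger analysis would use an F-statistic over multiple lags; the R-squared gain here simply conveys the directional asymmetry the abstract reports.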

  8. Does common spatial origin promote the auditory grouping of temporally separated signal elements in grey treefrogs?

    PubMed

    Bee, Mark A; Riemersma, Kasen K

    2008-09-01

    'Sequential integration' represents a form of auditory grouping in which temporally separated sounds produced by the same source are perceptually bound together over time into a coherent 'auditory stream'. In humans, sequential integration plays important roles in music and speech perception. In this study of the grey treefrog (Hyla chrysoscelis), we took advantage of female selectivity for advertisement calls with conspecific pulse rates to investigate common spatial location as a cue for sequential integration. We presented females with two temporally interleaved pulse sequences with pulse rates of 25 pulses/s, which is half the conspecific pulse rate and more similar to that of H. versicolor, a syntopically breeding heterospecific. We tested the hypothesis that common spatial origin between the two pulse sequences would promote their integration into a coherent auditory stream with an attractive conspecific pulse rate. As the spatial separation between the speakers broadcasting the interleaved pulse sequences decreased from 180° to 0°, more females responded and females exhibited shorter response latencies and travelled shorter distances en route to a speaker. However, even in the 180° condition, most females (74%) still responded. Detailed video analyses revealed no evidence to suggest that patterns of female phonotaxis resulted from impaired abilities to localize sound sources in the spatially separated conditions. Together, our results suggest that females were fairly permissive of spatial incoherence between the interleaved pulse sequences and that common spatial origin may be only a relatively weak cue for sequential integration in grey treefrogs.

  9. Does common spatial origin promote the auditory grouping of temporally separated signal elements in grey treefrogs?

    PubMed Central

    Bee, Mark A.; Riemersma, Kasen K.

    2008-01-01

    ‘Sequential integration’ represents a form of auditory grouping in which temporally separated sounds produced by the same source are perceptually bound together over time into a coherent ‘auditory stream’. In humans, sequential integration plays important roles in music and speech perception. In this study of the grey treefrog (Hyla chrysoscelis), we took advantage of female selectivity for advertisement calls with conspecific pulse rates to investigate common spatial location as a cue for sequential integration. We presented females with two temporally interleaved pulse sequences with pulse rates of 25 pulses/s, which is half the conspecific pulse rate and more similar to that of H. versicolor, a syntopically breeding heterospecific. We tested the hypothesis that common spatial origin between the two pulse sequences would promote their integration into a coherent auditory stream with an attractive conspecific pulse rate. As the spatial separation between the speakers broadcasting the interleaved pulse sequences decreased from 180° to 0°, more females responded and females exhibited shorter response latencies and travelled shorter distances en route to a speaker. However, even in the 180° condition, most females (74%) still responded. Detailed video analyses revealed no evidence to suggest that patterns of female phonotaxis resulted from impaired abilities to localize sound sources in the spatially separated conditions. Together, our results suggest that females were fairly permissive of spatial incoherence between the interleaved pulse sequences and that common spatial origin may be only a relatively weak cue for sequential integration in grey treefrogs. PMID:19727419

  10. Czech version of Rey Auditory Verbal Learning test: normative data.

    PubMed

    Bezdicek, Ondrej; Stepankova, Hana; Moták, Ladislav; Axelrod, Bradley N; Woodard, John L; Preiss, Marek; Nikolai, Tomáš; Růžička, Evžen; Poreh, Amir

    2014-01-01

    The present study provides normative data stratified by age for the Rey Auditory Verbal Learning test Czech version (RAVLT) derived from a sample of 306 cognitively normal subjects (20-85 years). Participants met strict inclusion criteria (absence of any active or past neurological or psychiatric disorder) and performed within normal limits on other neuropsychological measures. Our analyses revealed significant relationships between most RAVLT indices and age and education. Normative data are provided not only for basic RAVLT scores, but for the first time also for a variety of derived (gained/lost access, primacy/recency effect) and error scores. The study confirmed the logarithmic character of the learning slope, consistent with other studies. It enables the clinician to evaluate more precisely a subject's RAVLT memory performance on a vast number of indices and can be viewed as a concrete example of the Quantified Process Approach to neuropsychological assessment.
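    The logarithmic learning slope mentioned above can be made concrete: mean recall across the five learning trials is well described by recall ≈ a + b·ln(trial), which is linear after a log transform of the trial number. A small sketch with hypothetical trial means (not the study's normative values):

```python
import numpy as np

# Hypothetical mean words recalled on RAVLT learning trials 1-5
# (illustrative numbers, not normative data from the study).
trials = np.arange(1, 6)
recall = np.array([6.1, 8.9, 10.5, 11.6, 12.3])

# Fit recall = a + b * ln(trial): a straight line in ln(trial).
b, a = np.polyfit(np.log(trials), recall, 1)
predicted = a + b * np.log(trials)
print(f"intercept a = {a:.2f}, slope b = {b:.2f}")
```

    A large positive b indicates steep early gains that flatten over later trials, the typical shape of a verbal learning curve.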

  11. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were

  12. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    PubMed

    Lehmann, Alexandre; Schönwiesner, Marc

    2014-01-01

    Selective attention is the mechanism that allows one to focus on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
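    Phase locking in frequency-following responses of the kind analysed above is commonly quantified as inter-trial phase coherence: the consistency of the response phase at the stimulus frequency across trials. A minimal numpy sketch (sampling rate, trial counts, and noise levels are invented for the example, and this is a generic measure, not the authors' exact analysis):

```python
import numpy as np

def phase_locking_value(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.
    trials: (n_trials, n_samples) array. Returns a value in [0, 1];
    1 means identical phase at `freq` on every trial."""
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    bin_idx = int(round(freq * n / fs))       # FFT bin nearest to `freq`
    phases = np.angle(spectra[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
fs, f0, n = 1000.0, 100.0, 1000
t = np.arange(n) / fs
# Phase-locked trials: the same 100 Hz component plus noise on every trial.
locked = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal((50, n))
# Unlocked trials: a random phase on each trial.
unlocked = np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi, (50, 1))) \
           + 0.5 * rng.standard_normal((50, n))
plv_locked = phase_locking_value(locked, fs, f0)
plv_unlocked = phase_locking_value(unlocked, fs, f0)
print(round(plv_locked, 2), round(plv_unlocked, 2))
```

    Attention effects like those in the abstract would appear as a change in this coherence measure in the task-relevant frequency band.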

  13. Effects of spatially correlated acoustic-tactile information on judgments of auditory circular direction

    NASA Astrophysics Data System (ADS)

    Cohen, Annabel J.; Lamothe, M. J. Reina; Toms, Ian D.; Fleming, Richard A. G.

    2002-05-01

    Cohen, Lamothe, Fleming, MacIsaac, and Lamoureux [J. Acoust. Soc. Am. 109, 2460 (2001)] reported that proximity governed circular direction judgments (clockwise/counterclockwise) of two successive tones emanating from all pairs of 12 speakers located at 30-degree intervals around a listener's head. Many listeners appeared to experience systematic front-back confusion. Diametrically opposed locations (180 degrees, a theoretically ambiguous direction) produced a direction bias pattern resembling Deutsch's tritone paradox [Deutsch, Kuyper, and Fisher, Music Percept. 5, 79-92 (1987)]. In Experiment 1 of the present study, the circular direction task was conducted in the tactile domain using 12 circumcranial points of vibration. For all 5 participants, proximity governed direction (without front-back confusion) and a simple clockwise bias was shown for 180-degree pairs. Experiment 2 tested 9 new participants in one unimodal auditory condition and two bimodal auditory-tactile conditions (spatially correlated/spatially uncorrelated). Correlated auditory-tactile information eliminated front-back confusion for 8 participants and replaced the "paradoxical" bias for 180-degree pairs with the clockwise bias. Thus, spatially correlated audio-tactile location information improves the veridical representation of 360-degree acoustic space, and modality-specific principles are implicated by the unique circular direction bias patterns for 180-degree pairs in the separate auditory and tactile modalities. [Work supported by NSERC.]
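    The proximity principle invoked above has a simple formal statement: the predicted judgment is whichever rotation traverses the smaller arc between the two speaker azimuths, with diametrically opposed pairs left undetermined. A sketch of that rule (the function and labels are illustrative, not from the study):

```python
def predicted_direction(azim_from, azim_to):
    """Proximity rule for circular direction judgments.
    Azimuths in degrees, clockwise-positive. Returns 'CW', 'CCW',
    or 'ambiguous' for diametrically opposed (180-degree) pairs."""
    diff = (azim_to - azim_from) % 360
    if diff == 180:
        return "ambiguous"   # the theoretically ambiguous case in the study
    return "CW" if diff < 180 else "CCW"

print(predicted_direction(0, 30))    # CW  (short arc is clockwise)
print(predicted_direction(0, 330))   # CCW (short arc is counterclockwise)
print(predicted_direction(90, 270))  # ambiguous
```

    Front-back confusion would correspond to reflecting one azimuth about the interaural axis before applying this rule, which is what the correlated tactile information appears to prevent.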

  14. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    PubMed

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) resting with eyes open, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (a delay of 8 min 30 s) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.

  15. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality or to interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of the processing of irrelevant speech, which would presumably distract from the phonological task involving the letters.

  16. An exploration of spatial auditory BCI paradigms with different sounds: music notes versus beeps.

    PubMed

    Huang, Minqiang; Daly, Ian; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2016-06-01

    Visual brain-computer interfaces (BCIs) are not suitable for people who cannot reliably maintain their eye gaze. Considering that this group usually maintains audition, an auditory-based BCI may be a good choice for them. In this paper, we explore two auditory patterns: (1) a pattern utilizing symmetrical spatial cues with multiple frequency beeps [called the high low medium (HLM) pattern], and (2) a pattern utilizing non-symmetrical spatial cues with six tones derived from the diatonic scale [called the diatonic scale (DS) pattern]. These two patterns are compared to each other in terms of accuracy to determine which auditory pattern is better. The HLM pattern uses three different frequency beeps and has a symmetrical spatial distribution. The DS pattern uses six spoken stimuli: the notes solmized as "do", "re", "mi", "fa", "sol" and "la", derived from the diatonic scale. These six sounds are played from six spatially distributed speakers. Thus, we compare a BCI paradigm using beeps with another BCI paradigm using tones on the diatonic scale, when the stimuli are spatially distributed. Although no significant differences are found between the ERPs, the HLM pattern performs better than the DS pattern: the online accuracy achieved with the HLM pattern is significantly higher than that achieved with the DS pattern (p = 0.0028).

  17. Interdependent encoding of pitch, timbre, and spatial location in auditory cortex.

    PubMed

    Bizley, Jennifer K; Walker, Kerry M M; Silverman, Bernard W; King, Andrew J; Schnupp, Jan W H

    2009-02-18

    Because we can perceive the pitch, timbre, and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from nonspatial attributes. Indeed, recent studies support the existence of anatomically segregated "what" and "where" cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and nonspatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre, and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Although indicating that neural encoding of pitch, location, and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and nonspatial cues at higher cortical levels. Some units exhibited significant nonlinear interactions between particular combinations of pitch, timbre, and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and nonspatial attributes. Such nonlinearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects.
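    The variance decomposition mentioned above partitions a unit's response variance into the parts attributable to each stimulus dimension and their interaction. As a simplified two-factor sketch (the study also varied timbre, and its actual technique is more elaborate), the classic two-way decomposition over a pitch × azimuth response grid can be written with made-up response values:

```python
import numpy as np

def variance_fractions(resp):
    """resp: mean responses on a (n_pitch, n_azimuth) stimulus grid.
    Returns the fractions of total variance captured by the pitch
    main effect, the azimuth main effect, and their interaction."""
    grand = resp.mean()
    p_eff = resp.mean(axis=1) - grand          # pitch marginal effects
    a_eff = resp.mean(axis=0) - grand          # azimuth marginal effects
    inter = resp - grand - p_eff[:, None] - a_eff[None, :]
    ss_p = resp.shape[1] * np.sum(p_eff ** 2)
    ss_a = resp.shape[0] * np.sum(a_eff ** 2)
    ss_i = np.sum(inter ** 2)
    total = ss_p + ss_a + ss_i                 # exact identity for one value per cell
    return ss_p / total, ss_a / total, ss_i / total

rng = np.random.default_rng(2)
# Hypothetical unit: strongly pitch-tuned, weakly azimuth-tuned, tiny noise.
pitches = np.array([1.0, 2.0, 4.0, 8.0])
azims = np.array([0.1, 0.2, 0.15])
resp = pitches[:, None] + azims[None, :] + 0.01 * rng.standard_normal((4, 3))
fp, fa, fi = variance_fractions(resp)
print(f"pitch={fp:.3f} azimuth={fa:.3f} interaction={fi:.3f}")
```

    A large interaction fraction is what signals the nonlinear pitch-timbre-azimuth interplay reported in the abstract; in this additive toy example it is near zero.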

  18. Optimal neural population coding of an auditory spatial cue.

    PubMed

    Harper, Nicol S; McAlpine, David

    2004-08-05

    A sound, depending on the position of its source, can take more time to reach one ear than the other. This interaural (between the ears) time difference (ITD) provides a major cue for determining the source location. Many auditory neurons are sensitive to ITDs, but the means by which such neurons represent ITD is a contentious issue. Recent studies question whether the classical general model (the Jeffress model) applies across species. Here we show that ITD coding strategies of different species can be explained by a unifying principle: that the ITDs an animal naturally encounters should be coded with maximal accuracy. Using statistical techniques and a stochastic neural model, we demonstrate that the optimal coding strategy for ITD depends critically on head size and sound frequency. For small head sizes and/or low-frequency sounds, the optimal coding strategy tends towards two distinct sub-populations tuned to ITDs outside the range created by the head. This is consistent with recent observations in small mammals. For large head sizes and/or high frequencies, the optimal strategy is a homogeneous distribution of ITD tunings within the range created by the head. This is consistent with observations in the barn owl. For humans, the optimal strategy to code ITDs from an acoustically measured distribution depends on frequency; above 400 Hz a homogeneous distribution is optimal, and below 400 Hz distinct sub-populations are optimal.
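    The "range created by the head" above is fixed by head geometry and the speed of sound; the Woodworth spherical-head approximation gives the largest ITD a listener can experience. A sketch with illustrative head radii (the specific radii are assumptions for the example, not values from the paper):

```python
import math

def max_itd_us(head_radius_m, speed_of_sound=343.0):
    """Woodworth spherical-head approximation of the largest interaural
    time difference (source at 90 degrees azimuth), in microseconds:
    ITD(theta) = (r / c) * (theta + sin(theta))."""
    theta = math.pi / 2
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta)) * 1e6

# Illustrative head radii (assumed values) for species in the abstract's argument.
for species, radius in [("gerbil", 0.015), ("human", 0.0875), ("barn owl", 0.025)]:
    print(f"{species}: ~{max_itd_us(radius):.0f} us")
```

    A small head compresses the naturally occurring ITD range, which is the physical fact driving the prediction that optimal tuning distributions differ between small mammals and larger-headed listeners.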

  19. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. Here we present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward-predicting and nonreward-predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as has been suggested by previous animal studies.

  20. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions.

    PubMed

    Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie

    2016-01-06

    Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After familiarisation with the software, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli most benefited patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.

  1. Musicians' Online Performance during Auditory and Visual Statistical Learning Tasks.

    PubMed

    Mandikal Vasuki, Pragati R; Sharma, Mridula; Ibrahim, Ronny K; Arciuli, Joanne

    2017-01-01

    Musicians' brains are considered to be a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined implicit extraction of statistical regularities from a continuous stream of stimuli, known as statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task, both using the embedded triplet paradigm. Online measures, characterized by event-related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknown to participants, the stream was composed of triplets. Musicians showed advantages when compared to non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task. However, there were no differences between musicians and non-musicians for the vSL task. Results from the current study show that musical training is associated with enhancements in extraction of statistical cues only in the auditory domain.
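    In the embedded-triplet paradigm described above, the familiarization stream is built from fixed triplets so that transition probabilities within a triplet are 1.0 while transitions between triplets are unpredictable; segmentation can only come from tracking those statistics. A schematic generator (the element labels and the no-immediate-repeat constraint are illustrative, not the study's stimulus set):

```python
import random

def make_triplet_stream(n_triplets, seed=0):
    """Build a continuous stimulus stream from hidden triplets.
    Within-triplet transitions are deterministic; the triplet order
    is random, so between-triplet transitions are unpredictable."""
    triplets = [("A", "B", "C"), ("D", "E", "F"),
                ("G", "H", "I"), ("J", "K", "L")]
    rng = random.Random(seed)
    stream, last = [], None
    for _ in range(n_triplets):
        choices = [t for t in triplets if t is not last]  # no immediate repeats
        last = rng.choice(choices)
        stream.extend(last)
    return stream

stream = make_triplet_stream(6)
print(stream)
```

    In ERP versions of the task, responses time-locked to triplet-initial elements (here "A", "D", "G", "J") are compared with responses to the predictable within-triplet elements.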

  2. Musicians’ Online Performance during Auditory and Visual Statistical Learning Tasks

    PubMed Central

    Mandikal Vasuki, Pragati R.; Sharma, Mridula; Ibrahim, Ronny K.; Arciuli, Joanne

    2017-01-01

    Musicians’ brains are considered to be a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined implicit extraction of statistical regularities from a continuous stream of stimuli—statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task—both using the embedded triplet paradigm. Online measures, characterized by event-related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknown to participants, the stream was composed of triplets. Musicians showed advantages when compared to non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task. However, there were no differences between musicians and non-musicians for the vSL task. Results from the current study show that musical training is associated with enhancements in extraction of statistical cues only in the auditory domain. PMID:28352223

  3. Auditory artificial grammar learning in macaque and marmoset monkeys.

    PubMed

    Wilson, Benjamin; Slater, Heather; Kikuchi, Yukiko; Milne, Alice E; Marslen-Wilson, William D; Smith, Kenny; Petkov, Christopher I

    2013-11-27

    Artificial grammars (AG) are designed to emulate aspects of the structure of language, and AG learning (AGL) paradigms can be used to study the extent of nonhuman animals' structure-learning capabilities. However, different AG structures have been used with nonhuman animals and are difficult to compare across studies and species. We developed a simple quantitative parameter space, which we used to summarize previous nonhuman animal AGL results. This was used to highlight an under-studied AG with a forward-branching structure, designed to model certain aspects of the nondeterministic nature of word transitions in natural language and animal song. We tested whether two monkey species could learn aspects of this auditory AG. After habituating the monkeys to the AG, analysis of video recordings showed that common marmosets (New World monkeys) differentiated between well-formed, correct testing sequences and those violating the AG structure based primarily on simple learning strategies. By comparison, rhesus macaques (Old World monkeys) showed evidence for deeper levels of AGL. A novel eye-tracking approach confirmed this result in the macaques and demonstrated evidence for more complex AGL. This study provides evidence for a previously unknown level of AGL complexity in Old World monkeys that seems less evident in New World monkeys, which are more distant evolutionary relatives to humans. The findings allow for the development of both marmosets and macaques as neurobiological model systems to study different aspects of AGL at the neuronal level.

  4. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring.

    PubMed

    Rand, Kristina M; Creem-Regehr, Sarah H; Thompson, William B

    2015-06-01

    The ability to navigate without getting lost is an important aspect of quality of life. In 5 studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2, participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared with degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared with degraded vision. With degraded vision, auditory task performance was better when guided compared with unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing.

  5. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  6. Statistical learning and auditory processing in children with music training: An ERP study.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children aged 9-11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments.

  7. Prolonged maturation of auditory perception and learning in gerbils

    PubMed Central

    Sarro, Emma C.; Sanes, Dan H.

    2011-01-01

    In humans, auditory perception reaches maturity over a broad age range, extending through adolescence. Despite this slow maturation, children are considered to be outstanding learners, suggesting that immature perceptual skills might actually be advantageous to improvement on an acoustic task as a result of training (perceptual learning). Previous non-human studies have not employed an identical task when comparing perceptual performance of young and mature subjects, making it difficult to assess learning. Here, we used an identical procedure on juvenile and adult gerbils to examine the perception of amplitude modulation (AM), a stimulus feature that is an important component of most natural sounds. On average, Adult animals could detect smaller fluctuations in amplitude (i.e. smaller modulation depths) than Juveniles, indicating immature perceptual skills in Juveniles. However, the population variance was much greater for Juveniles, with a few animals displaying adult-like AM detection. To determine whether immature perceptual skills facilitated learning, we compared naïve performance on the AM detection task with the amount of improvement following additional training. The amount of improvement in Adults correlated with naïve performance: those with the poorest naïve performance improved the most. In contrast, the naïve performance of Juveniles did not predict the amount of learning. Those Juveniles with immature AM detection thresholds did not display greater learning than Adults. Furthermore, for several of the Juveniles with adult-like thresholds, AM detection deteriorated with repeated testing. Thus, immature perceptual skills in young animals were not associated with greater learning. PMID:20506133

  8. Auditory Model: Effects on Learning under Blocked and Random Practice Schedules

    ERIC Educational Resources Information Center

    Han, Dong-Wook; Shea, Charles H.

    2008-01-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice.…

  9. Auditory Processing, Linguistic Prosody Awareness, and Word Reading in Mandarin-Speaking Children Learning English

    ERIC Educational Resources Information Center

    Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.

    2017-01-01

    This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…

  11. Unraveling the Biology of Auditory Learning: A Cognitive-Sensorimotor-Reward Framework.

    PubMed

    Kraus, Nina; White-Schwoch, Travis

    2015-11-01

    The auditory system is stunning in its capacity for change: a single neuron can modulate its tuning in minutes. Here we articulate a conceptual framework to understand the biology of auditory learning where an animal must engage cognitive, sensorimotor, and reward systems to spark neural remodeling. Central to our framework is a consideration of the auditory system as an integrated whole that interacts with other circuits to guide and refine life in sound. Despite our emphasis on the auditory system, these principles may apply across the nervous system. Understanding neuroplastic changes in both normal and impaired sensory systems guides strategies to improve everyday communication.

  12. Effect of GIS Learning on Spatial Thinking

    ERIC Educational Resources Information Center

    Lee, Jongwon; Bednarz, Robert

    2009-01-01

    A spatial-skills test is used to examine the effect of GIS learning on the spatial thinking ability of college students. Eighty students at a large state university completed pre- and post- spatial-skills tests administered during the 2003 fall semester. Analysis of changes in the students' test scores revealed that GIS learning helped students…

  14. Spatial auditory regularity encoding and prediction: Human middle-latency and long-latency auditory evoked potentials.

    PubMed

    Cornella, M; Bendixen, A; Grimm, S; Leung, S; Schröger, E; Escera, C

    2015-11-11

    By encoding acoustic regularities present in the environment, the human brain can generate predictions of what is likely to occur next. Recent studies suggest that deviations from encoded regularities are detected within 10-50 ms after stimulus onset, as indicated by electrophysiological effects in the middle latency response (MLR) range. This is upstream of previously known long-latency (LLR) signatures of deviance detection such as the mismatch negativity (MMN) component. In the present study, we created predictable and unpredictable contexts to investigate MLR and LLR signatures of the encoding of spatial auditory regularities and the generation of predictions from these regularities. Chirps were monaurally delivered in either a regular (predictable: left-right-left-right) or a random (unpredictable left/right alternation or repetition) manner. Occasional stimulus omissions occurred in both types of sequences. Results showed that the Na component (peaking at 34 ms after stimulus onset) was attenuated for regular relative to random chirps, although no differences were observed for stimulus omission responses in the same latency range. In the LLR range, larger chirp- and omission-evoked responses were elicited for the regular than for the random condition, and predictability effects were more prominent over the right hemisphere. We discuss our findings in the framework of a hierarchical organization of spatial regularity encoding. This article is part of a Special Issue entitled SI: Prediction and Attention.
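    The stimulus design described above (regular left-right alternation versus random ear assignment, each with occasional omissions) can be sketched minimally as follows; the omission probability and sequence length are invented for illustration, not the study's parameters:

    ```python
    import random

    def make_sequence(n, regular=True, p_omit=0.1, seed=0):
        """Return a list of per-trial events: 'L', 'R', or 'omit'.

        regular=True gives the predictable L-R-L-R pattern; regular=False
        assigns ears at random. Omissions occur in both sequence types.
        """
        rng = random.Random(seed)
        seq = []
        for i in range(n):
            ear = ("L", "R")[i % 2] if regular else rng.choice(("L", "R"))
            seq.append("omit" if rng.random() < p_omit else ear)
        return seq

    reg = make_sequence(12, regular=True)   # predictable context
    rnd = make_sequence(12, regular=False)  # unpredictable context
    print(reg)
    print(rnd)
    ```

    In the regular sequence, every delivered chirp is fully predictable from its position; in the random sequence, each ear is equiprobable, so no spatial regularity can be encoded.
    
    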

  15. Modulation of human auditory spatial scene analysis by transcranial direct current stimulation.

    PubMed

    Lewald, Jörg

    2016-04-01

    Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene.

  16. Auditory spatial discrimination by barn owls in simulated echoic conditions

    NASA Astrophysics Data System (ADS)

    Spitzer, Matthew W.; Bala, Avinash D. S.; Takahashi, Terry T.

    2003-03-01

    In humans, directional hearing in reverberant conditions is characterized by a "precedence effect," whereby directional information conveyed by leading sounds dominates perceived location, and listeners are relatively insensitive to directional information conveyed by lagging sounds. Behavioral studies provide evidence of precedence phenomena in a wide range of species. The present study employs a discrimination paradigm, based on habituation and recovery of the pupillary dilation response, to provide quantitative measures of precedence phenomena in the barn owl. As in humans, the owl's ability to discriminate changes in the location of lagging sources is impaired relative to that for single sources. Spatial discrimination of lead sources is also impaired, but to a lesser extent than discrimination of lagging sources. Results of a control experiment indicate that sensitivity to monaural cues cannot account for discrimination of lag source location. Thus, impairment of discrimination ability in the two-source conditions most likely reflects a reduction in sensitivity to binaural directional information. These results demonstrate a similarity of precedence effect phenomena in barn owls and humans, and provide a basis for quantitative comparison with neuronal data from the same species.

  18. Context-specific reweighting of auditory spatial cues following altered experience during development.

    PubMed

    Keating, Peter; Dahmen, Johannes C; King, Andrew J

    2013-07-22

    Neural systems must weight and integrate different sensory cues in order to make decisions. However, environmental conditions often change over time, altering the reliability of different cues and therefore the optimal way for combining them. To explore how cue integration develops in dynamic environments, we examined the effects on auditory spatial processing of rearing ferrets with localization cues that were modified via a unilateral earplug, interspersed with brief periods of normal hearing. In contrast with control animals, which rely primarily on timing and intensity differences between their two ears to localize sound sources, the juvenile-plugged ferrets developed the ability to localize sounds accurately by relying more on the unchanged spectral localization cues provided by the single normal ear. This adaptive process was paralleled by changes in neuronal responses in the primary auditory cortex, which became relatively more sensitive to these monaural spatial cues. Our behavioral and physiological data demonstrated, however, that the reweighting of different spatial cues disappeared as soon as normal hearing was experienced, showing for the first time that this type of plasticity can be context specific. These results show that developmental changes can be selectively expressed in response to specific acoustic conditions. In this way, the auditory system can develop and simultaneously maintain two distinct models of auditory space and switch between these models depending on the prevailing sensory context. This ability is likely to be critical for maintaining accurate perception in dynamic environments and may point toward novel therapeutic strategies for individuals who experience sensory deficits during development. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Spatial Processing Is Frequency Specific in Auditory Cortex But Not in the Midbrain.

    PubMed

    Sollini, Joseph; Mill, Robert; Sumner, Christian J

    2017-07-05

    The cochlea behaves like a bank of band-pass filters, segregating information into different frequency channels. Some aspects of perception reflect processing within individual channels, but others involve the integration of information across them. One instance of this is sound localization, which improves with increasing bandwidth. The processing of binaural cues for sound location has been studied extensively. However, although the advantage conferred by bandwidth is clear, we currently know little about how this additional information is combined to form our percept of space. We investigated the ability of cells in the auditory system of guinea pigs to compare interaural level differences (ILDs), a key localization cue, between tones of disparate frequencies in each ear. Cells in auditory cortex believed to be integral to ILD processing (excitatory from one ear, inhibitory from the other: EI cells) compare ILDs separately over restricted frequency ranges which are not consistent with their monaural tuning. In contrast, cells that are excitatory from both ears (EE cells) show no evidence of frequency-specific processing. Both cell types are explained by a model in which ILDs are computed within separate frequency channels and subsequently combined in a single cortical cell. Interestingly, ILD processing in all inferior colliculus cell types (EE and EI) is largely consistent with processing within single, matched-frequency channels from each ear. Our data suggest a clear constraint on the way that localization cues are integrated: cortical ILD tuning to broadband sounds is a composite of separate, frequency-specific, binaurally sensitive channels. This frequency-specific processing appears after the level of the midbrain. SIGNIFICANCE STATEMENT For some sensory modalities (e.g., somatosensation, vision), the spatial arrangement of the outside world is inherited by the brain from the periphery. 
The auditory periphery is arranged spatially by frequency, not spatial
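    The model this abstract describes, in which ILDs are computed within separate frequency channels and only later combined, can be illustrated conceptually. All channel levels below are invented numbers in dB; this is a sketch of the idea, not the study's analysis:

    ```python
    # Made-up per-channel sound levels (dB) at each ear.
    left_db  = {"500 Hz": 62.0, "1 kHz": 58.0, "4 kHz": 55.0}
    right_db = {"500 Hz": 60.0, "1 kHz": 61.0, "4 kHz": 49.0}

    def ild_per_channel(left, right):
        """ILD (left minus right, dB) computed independently in each channel."""
        return {ch: left[ch] - right[ch] for ch in left}

    def composite_ild(per_channel):
        """One possible broadband read-out: average the channel-wise ILDs,
        mirroring the idea that cortical broadband ILD tuning is a composite
        of frequency-specific binaural channels."""
        return sum(per_channel.values()) / len(per_channel)

    channels = ild_per_channel(left_db, right_db)
    print(channels)                 # {'500 Hz': 2.0, '1 kHz': -3.0, '4 kHz': 6.0}
    print(composite_ild(channels))  # ~1.67 dB
    ```

    The contrast in the paper is between this channel-wise scheme (cortex) and a single matched-frequency comparison per cell (midbrain).
    
    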

  2. Weighting of Spatial and Spectro-Temporal Cues for Auditory Scene Analysis by Human Listeners

    PubMed Central

    Bremen, Peter; Middlebrooks, John C.

    2013-01-01

    The auditory system creates a neuronal representation of the acoustic world based on spectral and temporal cues present at the listener's ears, including cues that potentially signal the locations of sounds. Discrimination of concurrent sounds from multiple sources is especially challenging. The current study is part of an effort to better understand the neuronal mechanisms governing this process, which has been termed “auditory scene analysis”. In particular, we are interested in spatial release from masking by which spatial cues can segregate signals from other competing sounds, thereby overcoming the tendency of overlapping spectra and/or common temporal envelopes to fuse signals with maskers. We studied detection of pulsed tones in free-field conditions in the presence of concurrent multi-tone non-speech maskers. In “energetic” masking conditions, in which the frequencies of maskers fell within the ±1/3-octave band containing the signal, spatial release from masking at low frequencies (∼600 Hz) was found to be about 10 dB. In contrast, negligible spatial release from energetic masking was seen at high frequencies (∼4000 Hz). We observed robust spatial release from masking in broadband “informational” masking conditions, in which listeners could confuse signal with masker even though there was no spectral overlap. Substantial spatial release was observed in conditions in which the onsets of the signal and all masker components were synchronized, and spatial release was even greater under asynchronous conditions. Spatial cues limited to high frequencies (>1500 Hz), which could have included interaural level differences and the better-ear effect, produced only limited improvement in signal detection. Substantially greater improvement was seen for low-frequency sounds, for which interaural time differences are the dominant spatial cue. PMID:23527271

  4. Auditory spatial resolution in horizontal, vertical, and diagonal planes

    NASA Astrophysics Data System (ADS)

    Grantham, D. Wesley; Hornsby, Benjamin W. Y.; Erpenbeck, Eric A.

    2003-08-01

    Minimum audible angle (MAA) and minimum audible movement angle (MAMA) thresholds were measured for stimuli in horizontal, vertical, and diagonal (60°) planes. A pseudovirtual technique was employed in which signals were recorded through KEMAR's ears and played back to subjects through insert earphones. Thresholds were obtained for wideband, high-pass, and low-pass noises. Only 6 of 20 subjects obtained wideband vertical-plane MAAs less than 10°, and only these 6 subjects were retained for the complete study. For all three filter conditions thresholds were lowest in the horizontal plane, slightly (but significantly) higher in the diagonal plane, and highest for the vertical plane. These results were similar in magnitude and pattern to those reported by Perrott and Saberi [J. Acoust. Soc. Am. 87, 1728-1731 (1990)] and Saberi and Perrott [J. Acoust. Soc. Am. 88, 2639-2644 (1990)], except that these investigators generally found that thresholds for diagonal planes were as good as those for the horizontal plane. The present results are consistent with the hypothesis that diagonal-plane performance is based on independent contributions from a horizontal-plane system (sensitive to interaural differences) and a vertical-plane system (sensitive to pinna-based spectral changes). Measurements of the stimuli recorded through KEMAR indicated that sources presented from diagonal planes can produce larger interaural level differences (ILDs) in certain frequency regions than would be expected based on the horizontal projection of the trajectory. Such frequency-specific ILD cues may underlie the very good performance reported in previous studies for diagonal spatial resolution. Subjects in the present study could apparently not take advantage of these cues in the diagonal-plane condition, possibly because they did not externalize the images to their appropriate positions in space or possibly because of the absence of a patterned visual field.

  5. Experience alters the spatial tuning of auditory units in the optic tectum during a sensitive period in the barn owl.

    PubMed

    Knudsen, E I

    1985-11-01

    The auditory spatial tuning of bimodal (auditory-visual) units in the optic tectum of the barn owl was altered by raising animals with one ear occluded. Changes in spatial tuning were assessed by comparing the location of a unit's auditory best area with that of its visual receptive field. As shown previously, auditory best areas are aligned with visual receptive fields in the tecta of normal birds (Knudsen, E. I. (1982) J. Neurosci. 2: 1177-1194). It was demonstrated in this study that, when birds were raised with one ear occluded, best areas and visual receptive fields were aligned only as long as the earplug was in place. When the earplug was removed, best areas and visual receptive fields became misaligned, indicating that a change in auditory spatial tuning had taken place during the period of occlusion. However, in a bird that received an earplug as an adult, no such alterations in auditory spatial tuning were observed; even after 1 year of monaural occlusion, auditory best areas and visual receptive fields were misaligned so long as the earplug was in place, and were aligned when the earplug was removed. These results suggest that exposure to abnormal localization cues modifies the auditory spatial tuning of tectal units only during a restricted, sensitive period early in development. After the earplug was removed from a juvenile bird that had been raised with an occluded ear, the initial misalignment between auditory best areas and visual receptive fields decreased gradually over a period of weeks. In contrast, when earplugs were removed from two adult birds that had been raised with monaural occlusions, auditory-visual misalignments persisted for as long as measurements were made, which was up to 1 year after earplug removal. These data indicate that auditory cues become permanently associated with locations in visual space during a critical period which draws to a close at about the age when the animal reaches adulthood. Horseradish peroxidase was

  6. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. 
Different cognitive classifications appear to be a consequence of learning task and lead to a recruitment of

  7. Time-window-of-integration (TWIN) model for saccadic reaction time: effect of auditory masker level on visual-auditory spatial interaction in elevation.

    PubMed

    Colonius, Hans; Diederich, Adele; Steenken, Rike

    2009-05-01

    Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory non-target. Steenken et al. (Brain Res 1220:150-156, 2008) presented an additional white-noise masker background of three seconds duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident versus disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. Here we show that the time-window-of-integration model accounts for this observation by variation of a perceivable-distance parameter in the second stage of the model whose value does not depend on stimulus onset asynchrony between target and non-target.
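    The two-stage TWIN logic summarized above can be illustrated with a small Monte Carlo sketch: a first-stage race between visual and auditory peripheral processes, with the second stage shortened only when the auditory stimulus wins the race and the visual target terminates within the integration window. All rates, the window width, and the facilitation values are invented for illustration and are not the model's fitted parameters:

    ```python
    import random

    def simulate_srt(n, rate_v=1 / 50, rate_a=1 / 30, window=200.0,
                     second_stage=120.0, delta=20.0, seed=1):
        """Mean saccadic RT (ms) under a simplified TWIN scheme.

        First stage: exponential peripheral processing times race each other.
        Second stage: fixed mean duration, reduced by `delta` ms on trials
        where integration occurs (auditory first, visual within `window`).
        """
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            v = rng.expovariate(rate_v)   # visual peripheral processing time
            a = rng.expovariate(rate_a)   # auditory peripheral processing time
            integrated = a < v < a + window
            total += v + second_stage - (delta if integrated else 0.0)
        return total / n

    # A positive delta (e.g. perceivably coincident stimuli) speeds responses;
    # a negative delta (perceivably distant stimuli) slows them.
    print(simulate_srt(20000, delta=20.0))   # facilitation
    print(simulate_srt(20000, delta=-20.0))  # inhibition
    ```

    The abstract's masker-level effect corresponds, in this scheme, to shifting the perceivable-distance parameter that sets the sign and size of `delta` in the second stage.
    
    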

  8. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    NASA Astrophysics Data System (ADS)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  9. Auditory Statistical Learning During Concurrent Physical Exercise and the Tolerance for Pitch, Tempo, and Rhythm Changes.

    PubMed

    Daikoku, Tatsuya; Takahashi, Yuji; Tarumoto, Nagayoshi; Yasuda, Hideki

    2017-09-05

    Previous studies suggest that statistical learning is preserved when acoustic changes are made to auditory sequences. However, statistical learning effects can vary with and without concurrent exercise. The present study examined how concurrent physical exercise influences auditory statistical learning when acoustic and temporal changes are made to auditory sequences. Participants were presented with 500-tone sequences based on a Markov chain while cycling or resting, in ignored and attended conditions. Learning effects were evaluated using a familiarity test with four types of short tone series: a series whose stimuli were identical to the 500-tone sequence, and three series in which the frequencies, tempo, or rhythm were changed. The results suggest that, regardless of attention, concurrent exercise interferes with tolerance for rhythm changes, rather than tempo changes, in statistical learning. There may be specific relationships among statistical learning, rhythm perception, and the motor system engaged by physical exercise.
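
    A 500-tone sequence governed by a first-order Markov chain, of the kind used as exposure material in such statistical-learning studies, might be generated along these lines. The states (tone frequencies), transition matrix, and seed below are illustrative assumptions, not the study's actual parameters.

```python
import random

def markov_tone_sequence(n_tones, states, trans, seed=0):
    """Generate a tone sequence from a first-order Markov chain.

    states: list of tone frequencies (Hz); trans: dict mapping each
    state to a list of next-state probabilities (each row sums to 1).
    """
    rng = random.Random(seed)
    seq = [rng.choice(states)]
    for _ in range(n_tones - 1):
        probs = trans[seq[-1]]
        seq.append(rng.choices(states, weights=probs, k=1)[0])
    return seq

# Hypothetical 3-state chain with one high-probability successor per
# state, so short test series can match or violate the statistics.
states = [440.0, 554.4, 659.3]
trans = {
    440.0: [0.1, 0.8, 0.1],
    554.4: [0.1, 0.1, 0.8],
    659.3: [0.8, 0.1, 0.1],
}
seq = markov_tone_sequence(500, states, trans)
```

    Transposing the frequencies, or rescaling inter-onset intervals, changes the surface features while leaving the transition statistics intact, which is what the familiarity test probes.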

  10. Effects of chronic stress on the auditory system and fear learning: an evolutionary approach.

    PubMed

    Dagnino-Subiabre, Alexies

    2013-01-01

    Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research about chronic stress effects on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory systems, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article offers a new evolutionary approach to understanding the neurobiology of stress and stress-related disorders.

  11. Primitive Auditory Memory Is Correlated with Spatial Unmasking That Is Based on Direct-Reflection Integration

    PubMed Central

    Li, Huahui; Kong, Lingzhi; Wu, Xihong; Li, Liang

    2013-01-01

    In reverberant rooms with multiple people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadowing and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and results in the spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamic of the direct-reflection-integration-based spatial unmasking as a function of the reflection delay, and (2) whether this temporal dynamic is correlated with the listeners’ auditory ability to temporally retain raw acoustic signals (i.e., the fast decaying primitive auditory memory, PAM). The results showed that recognition of the target speech against the speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamic of the speech-recognition function is significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, which links the earlier-stage PAM with the later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions. PMID:23658664
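
    The reported descending exponential relationship between target-speech recognition and reflection delay can be sketched as a simple model fit. Everything below (the fixed floor and gain, the grid of candidate time constants, and the synthetic scores) is illustrative, not data from the study.

```python
import math

def exp_decay(delay, floor, gain, tau):
    """Descending exponential: recognition score vs. reflection delay (ms)."""
    return floor + gain * math.exp(-delay / tau)

def fit_tau(delays, scores, floor, gain, taus):
    """Grid-search the time constant tau minimizing squared error.

    floor and gain are held fixed for simplicity; a real fit would
    estimate all three parameters jointly.
    """
    best_tau, best_err = None, float("inf")
    for tau in taus:
        err = sum((exp_decay(d, floor, gain, tau) - s) ** 2
                  for d, s in zip(delays, scores))
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau

# Synthetic scores generated with tau = 50 ms, then recovered by the fit.
delays = [0, 25, 50, 100, 200]
scores = [exp_decay(d, floor=0.3, gain=0.5, tau=50) for d in delays]
tau_hat = fit_tau(delays, scores, floor=0.3, gain=0.5,
                  taus=list(range(10, 201, 5)))
```

    The fitted time constant of such a function is the quantity one would correlate, across listeners, with the temporal extent of PAM.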

  12. Spatial selective attention in a complex auditory environment such as polyphonic music.

    PubMed

    Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf

    2010-01-01

    To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.

  13. A mechanism for detecting coincidence of auditory and visual spatial signals.

    PubMed

    Orchard-Mills, Emily; Leung, Johahn; Burr, David; Morrone, Maria Concetta; Wufong, Ella; Carlile, Simon; Alais, David

    2013-01-01

    Information about the world is captured by our separate senses, and must be integrated to yield a unified representation. This raises the issue of which signals should be integrated and which should remain separate, as inappropriate integration will lead to misrepresentation and distortions. One strong cue suggesting that separate signals arise from a single source is coincidence, in space and in time. We measured increment thresholds for discriminating spatial intervals defined by pairs of simultaneously presented targets, one visual flash and one sound, for various separations. We report a 'dipper function', in which thresholds follow a 'U-shaped' curve, with thresholds initially decreasing with spatial interval, and then increasing for larger separations. The presence of a dip in the audiovisual increment-discrimination function is evidence that the auditory and visual signals both input to a common mechanism encoding spatial separation, and a simple filter model with a sigmoidal transduction function simulated the results well. The function of an audiovisual spatial filter may be to detect coincidence, a fundamental cue guiding whether to integrate or segregate.
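
    The 'dipper' shape falls out of exactly the kind of simple model the abstract describes: a sigmoidal transducer followed by a fixed response criterion. The transducer form (Naka-Rushton-style), criterion, and parameter values below are illustrative assumptions, not the authors' fitted model.

```python
def transducer(s, p=2.0, z=1.0):
    """Sigmoidal transduction of spatial separation s (arbitrary units)."""
    return s ** p / (s ** p + z ** p)

def increment_threshold(pedestal, criterion=0.05, step=1e-4):
    """Smallest separation increment whose response change reaches criterion."""
    base = transducer(pedestal)
    delta = 0.0
    while transducer(pedestal + delta) - base < criterion:
        delta += step
        if delta > 100:  # guard against non-convergence
            return float("inf")
    return delta

# Thresholds across pedestal separations trace a U-shaped dipper:
# lowest at a small non-zero pedestal, rising again for large ones.
thresholds = {ped: increment_threshold(ped) for ped in (0.0, 0.5, 1.0, 3.0)}
```

    The dip arises because a small pedestal pushes the stimulus onto the steep, accelerating part of the sigmoid, where a fixed response criterion is reached by a smaller physical increment.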

  14. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention on multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention in combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, a space bisection task, participants had to evaluate the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed an enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.

  15. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

    This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (older/young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.

  16. The Role of Age and Executive Function in Auditory Category Learning

    PubMed Central

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987

  17. The role of age and executive function in auditory category learning.

    PubMed

    Reetzke, Rachel; Maddox, W Todd; Chandrasekaran, Bharath

    2016-02-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study was twofold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence and into early adulthood and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. A sample of 60 participants with normal hearing (20 children, age range 7-12 years; 21 adolescents, age range 13-19 years; and 19 young adults, age range 20-23 years) learned to categorize novel dynamic "ripple" sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied "Gabor" patches in the visual domain. Results reveal that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e., a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning.
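
    A conjunctive rule-based strategy of the kind identified by the modeling analyses can be illustrated as a two-dimensional decision rule: a category response requires both stimulus dimensions to satisfy their criteria. The dimensions (temporal and spectral modulation) match ripple stimuli generally, but the criterion values below are hypothetical.

```python
def conjunctive_rule(temporal_mod, spectral_mod, t_crit, s_crit):
    """Conjunctive rule-based categorization of a ripple sound:
    respond 'A' only when BOTH modulation values fall below their
    criteria, otherwise 'B'.  Criteria are illustrative assumptions."""
    return "A" if (temporal_mod < t_crit and spectral_mod < s_crit) else "B"

# Hypothetical criteria (cycles/s and cycles/octave) applied to four
# stimuli spanning the two dimensions.
labels = [conjunctive_rule(t, s, t_crit=8.0, s_crit=1.0)
          for t, s in [(4.0, 0.5), (4.0, 1.5), (12.0, 0.5), (12.0, 1.5)]]
```

    A unidimensional rule, by contrast, would ignore one of the two dimensions; decision-bound modeling distinguishes the two strategies from a learner's pattern of responses.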

  18. Auditory model: effects on learning under blocked and random practice schedules.

    PubMed

    Han, Dong-Wook; Shea, Charles H

    2008-12-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice. Participants (N = 80) were randomly assigned to one of eight practice conditions, which differed in practice schedule (blocked-blocked, blocked-random, random-blocked, random-random) and auditory model (no model, model). The results indicated that the auditory model enhanced relative timing performance on the delayed retention test regardless of the practice schedule, but it did not influence the learning of absolute timing. Blocked-blocked and blocked-random practice conditions resulted in enhanced relative timing retention performance relative to random-blocked and random-random practice schedules. Random-random and blocked-random practice schedules resulted in better absolute timing than blocked-blocked or random-blocked practice, regardless of the presence or absence of an auditory model during acquisition. Thus, considering both relative and absolute timing, the blocked-random practice condition resulted in overall learning superior to the other practice schedules. The results also suggest that an auditory model produces an added effect on learning relative timing regardless of the practice schedule, but it does not influence the learning of absolute timing.

  19. Auditory Discrimination and Identification in Foreign Language Learning.

    ERIC Educational Resources Information Center

    Weiss, Louis

    The main purpose of this study was to investigate the validity of the assumption that auditory discrimination and pronunciation in a foreign language are closely related. If the assumption were to be well-founded, then it might be possible for foreign language students with high auditory discrimination ability to work alone in the language…

  20. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    SciTech Connect

    Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong; Xiong Xiaoping; Merchant, Thomas E.

    2010-07-15

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.

  1. Testing the shared spatial representation of magnitude of auditory and visual intensity.

    PubMed

    Fairhurst, Merle T; Deroy, Ophelia

    2017-03-01

    The largely automatic mapping observed between space and sensory magnitudes suggests representation by a single system across domains. Using stimulus response compatibility tasks, the study confirms that a relative auditory magnitude such as loudness shows a spatial compatibility effect similar to those evidenced for visual sensory domains, but only with comparison tasks and for vertically oriented responses. No effect is seen when participants track changes in amplitude or when responses are oriented horizontally. In a bimodal context, the study tested whether the spatial mapping of magnitude in one sensory modality (loudness) interacts with the spatial representation of magnitude in another sense (luminance). Observed interactions across modalities suggest overlap of magnitude representation across distinct sensory domains, whereas the absence of an effect for dynamic changes in loudness suggests that the mapping is useful for decisions to act on one of several objects rather than for tracking magnitude changes in one object.

  2. Cortical processing of speech sounds and their analogues in a spatial auditory environment.

    PubMed

    Palomäki, Kalle J; Tiitinen, Hannu; Mäkinen, Ville; May, Patrick; Alku, Paavo

    2002-08-01

    We used magnetoencephalographic (MEG) measurements to study how speech sounds presented in a realistic spatial sound environment are processed in human cortex. A spatial sound environment was created by utilizing head-related transfer functions (HRTFs), and using a vowel, a pseudo-vowel, and a wide-band noise burst as stimuli. The behaviour of the most prominent auditory response, the cortically generated N1m, was investigated above the left and right hemispheres. We found that the N1m responses elicited by the vowel and by the pseudo-vowel were much larger in amplitude than those evoked by the noise burst. Corroborating previous observations, we also found that cortical activity reflecting the processing of spatial sound was more pronounced in the right than in the left hemisphere for all of the stimulus types and that both hemispheres exhibited contralateral tuning to sound direction.

  3. Echolocation, vocal learning, auditory localization and the relative size of the avian auditory midbrain nucleus (MLd).

    PubMed

    Iwaniuk, Andrew N; Clayton, Dale H; Wylie, Douglas R W

    2006-02-28

    The avian nucleus mesencephalicus lateralis, pars dorsalis (MLd) is an auditory midbrain nucleus that plays a significant role in a variety of acoustically mediated behaviours. We tested whether MLd is hypertrophied in species with auditory specializations: owls, the vocal learners and echolocaters. Using both conventional and phylogenetically corrected statistics, we find that the echolocating species have a marginally enlarged MLd, but it does not differ significantly from auditory generalists, such as pigeons, raptors and chickens. Similarly, all of the vocal learners tend to have relatively small MLds. Finally, MLd is significantly larger in owls compared to all other birds regardless of how the size of MLd is scaled. This enlargement is far more marked in asymmetrically eared owls than symmetrically eared owls. Variation in MLd size therefore appears to be correlated with some auditory specializations, but not others. Whether an auditory specialist possesses a hypertrophied MLd appears to depend on its hearing range and sensitivity, as well as its ability to resolve small azimuthal and elevational angles when determining the location of a sound. As a result, the only group to possess a significantly large MLd consistently across our analyses is the owls. Unlike other birds surveyed, owls have a battery of peripheral and other central auditory system specializations that correlate well with their hearing abilities. The lack of differences among the generalists, vocal learners and echolocaters therefore reflects an overall similarity in hearing abilities, despite the specific life history requirements of each specialization and species. This correlation between the size of a neural structure and the sensitivity of a perceptual domain parallels a similar pattern in mammals.

  4. Hearing impairment induces frequency-specific adjustments in auditory spatial tuning in the optic tectum of young owls.

    PubMed

    Gold, J I; Knudsen, E I

    1999-11-01

    Bimodal, auditory-visual neurons in the optic tectum of the barn owl are sharply tuned for sound source location. The auditory receptive fields (RFs) of these neurons are restricted in space primarily as a consequence of their tuning for interaural time differences and interaural level differences across broad ranges of frequencies. In this study, we examined the extent to which frequency-specific features of early auditory experience shape the auditory spatial tuning of these neurons. We manipulated auditory experience by implanting in one ear canal an acoustic filtering device that altered the timing and level of sound reaching the eardrum in a frequency-dependent fashion. We assessed the auditory spatial tuning at individual tectal sites in normal owls and in owls raised with the filtering device. At each site, we measured a family of auditory RFs using broadband sound and narrowband sounds with different center frequencies both with and without the device in place. In normal owls, the narrowband RFs for a given site all included a common region of space that corresponded with the broadband RF and aligned with the site's visual RF. Acute insertion of the filtering device in normal owls shifted the locations of the narrowband RFs away from the visual RF, the magnitude and direction of the shifts depending on the frequency of the stimulus. In contrast, in owls that were raised wearing the device, narrowband and broadband RFs were aligned with visual RFs so long as the device was in the ear but not after it was removed, indicating that auditory spatial tuning had been adaptively altered by experience with the device. The frequency tuning of tectal neurons in device-reared owls was also altered from normal. The results demonstrate that experience during development adaptively modifies the representation of auditory space in the barn owl's optic tectum in a frequency-dependent manner.

  5. Dissociable Memory- and Response-Related Activity in Parietal Cortex During Auditory Spatial Working Memory

    PubMed Central

    Alain, Claude; Shen, Dawei; Yu, He; Grady, Cheryl

    2010-01-01

    Attending and responding to sound location generates increased activity in parietal cortex which may index auditory spatial working memory and/or goal-directed action. Here, we used an n-back task (Experiment 1) and an adaptation paradigm (Experiment 2) to distinguish memory-related activity from that associated with goal-directed action. In Experiment 1, participants indicated, in separate blocks of trials, whether the incoming stimulus was presented at the same location as in the previous trial (1-back) or two trials ago (2-back). Prior to a block of trials, participants were told to use their left or right index finger. Accuracy and reaction times were worse for the 2-back than for the 1-back condition. The analysis of functional magnetic resonance imaging data revealed greater sustained task-related activity in the inferior parietal lobule (IPL) and superior frontal sulcus during 2-back than 1-back after accounting for response-related activity elicited by the targets. Target detection and response execution were also associated with enhanced activity in the IPL bilaterally, though the activation was anterior to that associated with sustained task-related activity. In Experiment 2, we used an event-related design in which participants listened (no response required) to trials that comprised four sounds presented either at the same location or at four different locations. We found larger IPL activation for changes in sound location than for sounds presented at the same location. The IPL activation overlapped with that observed during the auditory spatial working memory task. Together, these results provide converging evidence supporting the role of parietal cortex in auditory spatial working memory which can be dissociated from response selection and execution. PMID:21833258

  6. How spatial release from masking may fail to function in a highly directional auditory system

    PubMed Central

    Lee, Norman; Mason, Andrew C

    2017-01-01

    Spatial release from masking (SRM) occurs when spatial separation between a signal and masker decreases masked thresholds. The mechanically-coupled ears of Ormia ochracea are specialized for hyperacute directional hearing, but the possible role of SRM, or whether such specializations exhibit limitations for sound source segregation, is unknown. We recorded phonotaxis to a cricket song masked by band-limited noise. With a masker, response thresholds increased and localization was diverted away from the signal and masker. Increased separation from 6° to 90° did not decrease response thresholds or improve localization accuracy, thus SRM does not operate in this range of spatial separations. Tympanal vibrations and auditory nerve responses reveal that localization errors were consistent with changes in peripheral coding of signal location and flies localized towards the ear with better signal detection. Our results demonstrate that, in a mechanically coupled auditory system, specialization for directional hearing does not contribute to source segregation. DOI: http://dx.doi.org/10.7554/eLife.20731.001 PMID:28425912

  7. How does experience modulate auditory spatial processing in individuals with blindness?

    PubMed

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.

  8. Auditory Spatial Acuity Approximates the Resolving Power of Space-Specific Neurons

    PubMed Central

    Bala, Avinash D. S.; Spitzer, Matthew W.; Takahashi, Terry T.

    2007-01-01

    The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in the owl's midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination. PMID:17668055
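
Spatial acuity thresholds like those above are commonly estimated with adaptive psychophysical procedures. The sketch below shows a generic 1-up/2-down staircase converging on the ~70.7%-correct point; it is a hedged illustration of that standard technique, not the habituation/PDR procedure the study actually used, and the simulated listener, step size, and parameter names are all assumptions:

```python
import random

# Generic 1-up/2-down adaptive staircase for a discrimination threshold
# (e.g., a separation angle in degrees). Two consecutive correct
# responses make the trial harder; any error makes it easier.
def two_down_one_up(p_correct, start=20.0, step=2.0, floor=0.5,
                    n_trials=80, seed=7):
    """Estimate the ~70.7%-correct point of p_correct(angle)."""
    rng = random.Random(seed)
    angle = start
    consecutive = 0
    reversals = []
    prev_dir = 0
    for _ in range(n_trials):
        if rng.random() < p_correct(angle):
            consecutive += 1
            if consecutive < 2:
                continue            # need two correct before stepping down
            consecutive = 0
            direction = -1          # harder: smaller separation
        else:
            consecutive = 0
            direction = +1          # easier: larger separation
        if prev_dir != 0 and direction != prev_dir:
            reversals.append(angle) # record track reversals
        prev_dir = direction
        angle = max(floor, angle + direction * step)
    # Threshold estimate: mean angle over the last few reversals.
    tail = reversals[-6:] or [angle]
    return sum(tail) / len(tail)
```

Usage: pass any psychometric function, e.g. `two_down_one_up(lambda a: 0.9 if a >= 6.0 else 0.5)`, and the track settles near the angle where performance crosses ~70.7% correct.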

  9. A blueprint for vocal learning: auditory predispositions from brains to genomes

    PubMed Central

    Wheatcroft, David; Qvarnström, Anna

    2015-01-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions is determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life in combination with modern comparative genomic approaches may provide insights into the evolution of vocal learning. PMID:26246333

  10. Learning to produce speech with an altered vocal tract: The role of auditory feedback

    NASA Astrophysics Data System (ADS)

    Jones, Jeffery A.; Munhall, K. G.

    2003-01-01

    Modifying the vocal tract alters a speaker's previously learned acoustic-articulatory relationship. This study investigated the contribution of auditory feedback to the process of adapting to vocal-tract modifications. Subjects said the word /tas/ while wearing a dental prosthesis that extended the length of their maxillary incisor teeth. The prosthesis affected /s/ productions and the subjects were asked to learn to produce ``normal'' /s/'s. They alternately received normal auditory feedback and noise that masked their natural feedback during productions. Acoustic analysis of the speakers' /s/ productions showed that the distribution of energy across the spectra moved toward that of normal, unperturbed production with increased experience with the prosthesis. However, the acoustic analysis did not show any significant differences in learning dependent on auditory feedback. By contrast, when naive listeners were asked to rate the quality of the speakers' utterances, productions made when auditory feedback was available were evaluated to be closer to the subjects' normal productions than when feedback was masked. The perceptual analysis showed that speakers were able to use auditory information to partially compensate for the vocal-tract modification. Furthermore, utterances produced during the masked conditions also improved over a session, demonstrating that the compensatory articulations were learned and available after auditory feedback was removed.

  11. Connecting mathematics learning through spatial reasoning

    NASA Astrophysics Data System (ADS)

    Mulligan, Joanne; Woolcott, Geoffrey; Mitchelmore, Michael; Davis, Brent

    2017-07-01

    Spatial reasoning, an emerging transdisciplinary area of interest to mathematics education research, is proving integral to all human learning. It is particularly critical to science, technology, engineering and mathematics (STEM) fields. This project will create an innovative knowledge framework based on spatial reasoning that identifies new pathways for mathematics learning, pedagogy and curriculum. Novel analytical tools will map the unknown complex systems linking spatial and mathematical concepts. It will involve the design, implementation and evaluation of a Spatial Reasoning Mathematics Program (SRMP) in Grades 3 to 5. Benefits will be seen through development of critical spatial skills for students, increased teacher capability and informed policy and curriculum across STEM education.

  12. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    PubMed

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to

  13. Elevated Depressive Symptoms Enhance Reflexive but not Reflective Auditory Category Learning

    PubMed Central

    Maddox, W. Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G.

    2014-01-01

    In vision, an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual differences in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. The implications of this work for dual-process approach to auditory

  14. Emerging Auditory Systems: Implications for Instructing Handicapped Children. Auditory Learning Monograph Series 7.

    ERIC Educational Resources Information Center

    Anderson, William A.

    Four auditory delivery systems and their implications for instructing handicapped children are discussed. Outlined are six potential benefits of applying technologies to education, such as making education more productive. Pointed out are potential uses of sub-channel radio (such as programming for the blind), of broadband communication (such as…

  15. Learning of new sound categories shapes neural response patterns in human auditory cortex.

    PubMed

    Ley, Anke; Vroomen, Jean; Hausfeld, Lars; Valente, Giancarlo; De Weerd, Peter; Formisano, Elia

    2012-09-19

    The formation of new sound categories is fundamental to everyday goal-directed behavior. Categorization requires the abstraction of discrete classes from continuous physical features as required by context and task. Electrophysiology in animals has shown that learning to categorize novel sounds alters their spatiotemporal neural representation at the level of early auditory cortex. However, functional magnetic resonance imaging (fMRI) studies so far did not yield insight into the effects of category learning on sound representations in human auditory cortex. This may be due to the use of overlearned speech-like categories and fMRI subtraction paradigms, leading to insufficient sensitivity to distinguish the responses to learning-induced, novel sound categories. Here, we used fMRI pattern analysis to investigate changes in human auditory cortical response patterns induced by category learning. We created complex novel sound categories and analyzed distributed activation patterns during passive listening to a sound continuum before and after category learning. We show that only after training, sound categories could be successfully decoded from early auditory areas and that learning-induced pattern changes were specific to the category-distinctive sound feature (i.e., pitch). Notably, the similarity between fMRI response patterns for the sound continuum mirrored the sigmoid shape of the behavioral category identification function. Our results indicate that perceptual representations of novel sound categories emerge from neural changes at early levels of the human auditory processing hierarchy.
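
The abstract reports that the similarity between fMRI response patterns mirrored the sigmoid shape of the behavioral category identification function. A minimal sketch of such a function follows; the boundary location `x0` and slope `k` are illustrative assumptions, not parameters from the study:

```python
import math

# Hypothetical sigmoid category-identification function over a sound
# continuum normalized to [0, 1]. x0 marks the category boundary and
# k controls how abruptly identification switches between categories.
def p_category_b(x, x0=0.5, k=12.0):
    """Probability of labeling a continuum stimulus as category B."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

# The function is steep near the boundary and saturates at the
# continuum endpoints, the hallmark of categorical identification.
for x in (0.0, 0.5, 1.0):
    print(round(p_category_b(x), 3))
```
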

  16. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    PubMed

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.

  17. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  19. Spatial organization of excitatory synaptic inputs to layer 4 neurons in mouse primary auditory cortex

    PubMed Central

    Kratz, Megan B.; Manis, Paul B.

    2015-01-01

    Layer 4 (L4) of primary auditory cortex (A1) receives a tonotopically organized projection from the medial geniculate nucleus of the thalamus. However, individual neurons in A1 respond to a wider range of sound frequencies than would be predicted by their thalamic input, which suggests the existence of cross-frequency intracortical networks. We used laser scanning photostimulation and uncaging of glutamate in brain slices of mouse A1 to characterize the spatial organization of intracortical inputs to L4 neurons. Slices were prepared to include the entire tonotopic extent of A1. We find that L4 neurons receive local vertically organized (columnar) excitation from layers 2 through 6 (L6) and horizontally organized excitation primarily from L4 and L6 neurons in regions centered ~300–500 μm caudal and/or rostral to the cell. Excitatory horizontal synaptic connections from layers 2 and 3 were sparse. The origins of horizontal projections from L4 and L6 correspond to regions in the tonotopic map that are approximately an octave away from the target cell location. Such spatially organized lateral connections may contribute to the detection and processing of auditory objects with specific spectral structures. PMID:25972787

  20. Learning strategy trumps motivational level in determining learning-induced auditory cortical plasticity.

    PubMed

    Bieszczad, Kasia M; Weinberger, Norman M

    2010-02-01

    Associative memory for auditory-cued events involves specific plasticity in the primary auditory cortex (A1) that facilitates responses to tones which gain behavioral significance, by modifying representational parameters of sensory coding. Learning strategy, rather than the amount or content of learning, can determine this learning-induced cortical (high order) associative representational plasticity (HARP). Thus, tone-contingent learning with signaled errors can be accomplished either by (1) responding only during tone duration ("tone-duration" strategy, T-Dur), or (2) responding from tone onset until receiving an error signal for responses made immediately after tone offset ("tone-onset-to-error", TOTE). While rats using both strategies achieve the same high level of performance, only those using the TOTE strategy develop HARP, viz., frequency-specific decreased threshold (increased sensitivity) and decreased bandwidth (increased selectivity) (Berlau & Weinberger, 2008). The present study challenged the generality of learning strategy by determining if high motivation dominates in the formation of HARP. Two groups of adult male rats were trained to bar-press during a 5.0kHz (10s, 70dB) tone for a water reward under either high (HiMot) or moderate (ModMot) levels of motivation. The HiMot group achieved a higher level of correct performance. However, terminal mapping of A1 showed that only the ModMot group developed HARP, i.e., increased sensitivity and selectivity in the signal-frequency band. Behavioral analysis revealed that the ModMot group used the TOTE strategy while HiMot subjects used the T-Dur strategy. Thus, type of learning strategy, not level of learning or motivation, is dominant for the formation of cortical plasticity.

  1. The influence of acoustic reflections from diffusive architectural surfaces on spatial auditory perception

    NASA Astrophysics Data System (ADS)

    Robinson, Philip W.

    This thesis addresses the effect of reflections from diffusive architectural surfaces on the perception of echoes and on auditory spatial resolution. Diffusive architectural surfaces play an important role in performance venue design for architectural expression and proper sound distribution. Extensive research has been devoted to the prediction and measurement of the spatial dispersion. However, previous psychoacoustic research on perception of reflections and the precedence effect has focused on specular reflections. This study compares the echo threshold of specular reflections, against those for reflections from realistic architectural surfaces, and against synthesized reflections that isolate individual qualities of reflections from diffusive surfaces, namely temporal dispersion and spectral coloration. In particular, the activation of the precedence effect, as indicated by the echo threshold is measured. Perceptual tests are conducted with direct sound, and simulated or measured reflections with varying temporal dispersion. The threshold for reflections from diffusive architectural surfaces is found to be comparable to that of a specular reflection of similar energy rather than similar amplitude. This is surprising because the amplitude of the dispersed reflection is highly attenuated, and onset cues are reduced. This effect indicates that the auditory system is integrating reflection response energy dispersed over many milliseconds into a single stream. Studies on the effect of a single diffuse reflection are then extended to a full architectural enclosure with various surface properties. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. It is found that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat rather than diffusive surfaces. Additionally, subjective impressions are

  2. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study

    PubMed Central

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  3. Oscillatory alpha modulations in right auditory regions reflect the validity of acoustic cues in an auditory spatial attention task.

    PubMed

    Weisz, Nathan; Müller, Nadia; Jatzev, Sabine; Bertrand, Olivier

    2014-10-01

    Anticipation of targets in the left or right hemifield leads to alpha modulations in posterior brain areas. Recently using magnetoencephalography, we showed increased right auditory alpha activity when attention was cued ipsilaterally. Here, we investigated how cue validity itself influences oscillatory alpha activity. Acoustic cues were presented either to the right or left ear, followed by a compound dichotically presented target plus distractor. The preceding cue was either informative (75% validity) or uninformative (50%) about the location of the upcoming target. Cue validity × side-related alpha modulations were identified in pre- and posttarget periods in a right lateralized network, comprising auditory and nonauditory regions. This replicates and extends our previous finding of the right hemispheric dominance of auditory attentional modulations. Importantly, effective connectivity analysis showed that, in the pretarget period, this effect is accompanied by a pronounced and time-varying connectivity pattern of the right auditory cortex to the right intraparietal sulcus (IPS), with influence of IPS on superior temporal gyrus dominating at earlier intervals of the cue-target period. Our study underlines the assumption that alpha oscillations may play a similar functional role in auditory cortical regions as reported in other sensory modalities and suggests that these effects may be mediated via IPS.

  5. Time course and cost of misdirecting auditory spatial attention in younger and older adults.

    PubMed

    Singh, Gurjit; Pichora-Fuller, M Kathleen; Schneider, Bruce A

    2013-01-01

    The effects of directing, switching, and misdirecting auditory spatial attention in a complex listening situation were investigated in 8 younger and 8 older listeners with normal-hearing sensitivity below 4 kHz. In two companion experiments, a target sentence was presented from one spatial location and two competing sentences were presented simultaneously, one from each of two different locations. Pretrial, listeners were informed of the call-sign cue that identified which of the three sentences was the target and of the probability of the target sentence being presented from each of the three possible locations. Four different probability conditions varied in the likelihood of the target being presented at the left, center, and right locations. In Experiment 1, four timing conditions were tested: the original (unedited) sentences (which contained about 300 msec of filler speech between the call-sign cue and the onset of the target words), or modified (edited) sentences with silent pauses of 0, 150, or 300 msec replacing the filler speech. In Experiment 2, when the cued sentence was presented from an unlikely (side) listening location, for half of the trials the listener's task was to report target words from the cued sentence (cue condition); for the remaining trials, the listener's task was to report target words from the sentence presented from the opposite, unlikely (side) listening location (anticue condition). In Experiment 1, for targets presented from the likely (center) location, word identification was better for the unedited than for modified sentences. For targets presented from unlikely (side) locations, word identification was better when there was more time between the call-sign cue and target words. All listeners benefited similarly from the availability of more compared with less time and the presentation of continuous compared with interrupted speech. In Experiment 2, the key finding was that age-related performance deficits were observed in

  6. Unilateral Auditory Cortex Lesions Impair or Improve Discrimination Learning of Amplitude Modulated Sounds, Depending on Lesion Side

    PubMed Central

    Schulze, Holger; Deutscher, Anke; Tziridis, Konstantin; Scheich, Henning

    2014-01-01

    A fundamental principle of brain organization is bilateral symmetry of structures and functions. For spatial sensory and motor information processing, this organization is generally plausible subserving orientation and coordination of a bilaterally symmetric body. However, breaking of the symmetry principle is often seen for functions that depend on convergent information processing and lateralized output control, e.g. left hemispheric dominance for the linguistic speech system. Conversely, a subtle splitting of functions into hemispheres may occur if peripheral information from symmetric sense organs is partly redundant, e.g. auditory pattern recognition, and therefore allows central conceptualizations of complex stimuli from different feature viewpoints, as demonstrated e.g. for hemispheric analysis of frequency modulations in auditory cortex (AC) of mammals including humans. Here we demonstrate that discrimination learning of rapidly but not of slowly amplitude modulated tones is non-uniformly distributed across both hemispheres: While unilateral ablation of left AC in gerbils leads to impairment of normal discrimination learning of rapid amplitude modulations, right side ablations lead to improvement over normal learning. These results point to a rivalry interaction between both ACs in the intact brain where the right side competes with and weakens learning capability maximally attainable by the dominant left side alone. PMID:24466338

  7. Auditory Spatial Discrimination and the Mismatch Negativity Response in Hearing-Impaired Individuals

    PubMed Central

    Cai, Yuexin; Zheng, Yiqing; Liang, Maojin; Zhao, Fei; Yu, Guangzheng; Liu, Yu; Chen, Yuebo; Chen, Guisheng

    2015-01-01

    The aims of the present study were to investigate the ability of hearing-impaired (HI) individuals with different binaural hearing conditions to discriminate spatial auditory-sources at the midline and lateral positions, and to explore the possible central processing mechanisms by measuring the minimum audible angle (MAA) and mismatch negativity (MMN) response. To measure MAA at the left/right 0°, 45° and 90° positions, 12 normal-hearing (NH) participants and 36 patients with sensorineural hearing loss, which included 12 patients with symmetrical hearing loss (SHL) and 24 patients with asymmetrical hearing loss (AHL) [12 with unilateral hearing loss on the left (UHLL) and 12 with unilateral hearing loss on the right (UHLR)] were recruited. In addition, 128-electrode electroencephalography was used to record the MMN response in a separate group of 60 patients (20 UHLL, 20 UHLR and 20 SHL patients) and 20 NH participants. The results showed MAA thresholds of the NH participants to be significantly lower than the HI participants. Also, a significantly smaller MAA threshold was obtained at the midline position than at the lateral position in both NH and SHL groups. However, in the AHL group, the MAA threshold for the 90° position on the affected side was significantly smaller than the MAA thresholds obtained at other positions. Significantly reduced amplitudes and prolonged latencies of the MMN were found in the HI groups compared to the NH group. In addition, contralateral activation was found in the UHL group for sounds emanating from the 90° position on the affected side and in the NH group. These findings suggest that the abilities of spatial discrimination at the midline and lateral positions vary significantly in different hearing conditions. A reduced MMN amplitude and prolonged latency together with bilaterally symmetrical cortical activations over the auditory hemispheres indicate possible cortical compensatory changes associated with poor behavioral spatial

  8. Localized brain activation related to the strength of auditory learning in a parrot.

    PubMed

    Eda-Fujiwara, Hiroko; Imagawa, Takuya; Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J

    2012-01-01

    Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory.

  9. Auditory attention strategy depends on target linguistic properties and spatial configuration

    PubMed Central

    McCloy, Daniel R.; Lee, Adrian K. C.

    2015-01-01

    Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and requiring attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011

  10. Sensorimotor learning in children and adults: Exposure to frequency-altered auditory feedback during speech production.

    PubMed

    Scheerer, N E; Jacobson, D S; Jones, J A

    2016-02-09

    Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability.

  11. Utilising reinforcement learning to develop strategies for driving auditory neural implants

    NASA Astrophysics Data System (ADS)

    Lee, Geoffrey W.; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G.

    2016-08-01

    Objective. In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment which is based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. Approach. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provides a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Main results. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model’s function. We show the ability to effectively learn stimulation patterns which mimic the cochlea’s ability to convert acoustic frequencies to neural activity. Learning effective replication using neural stimulation took less than 20 min under continuous testing. Significance. These results show the utility of reinforcement learning in the field of neural stimulation. These results can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.

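The paper's "modified n-Armed Bandit solution" is not specified in the abstract; the core idea can be sketched with a generic epsilon-greedy bandit in which each arm is a candidate stimulation pattern and the reward is a noisy measure of how well the evoked neural response matches the target acoustic response. All numeric values below are hypothetical:

```python
import random

def run_bandit(true_similarity, n_trials=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy n-armed bandit. Each arm is a candidate stimulation
    pattern; pulling it returns a noisy reward proportional to how well the
    evoked neural response matches the target response (toy values)."""
    rng = random.Random(seed)
    n = len(true_similarity)
    estimates = [0.0] * n   # running mean reward per arm
    counts = [0] * n
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = true_similarity[arm] + rng.gauss(0.0, 0.1)  # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
    return max(range(n), key=lambda a: estimates[a])

# Hypothetical similarity of 5 stimulation patterns to the target response:
best = run_bandit([0.2, 0.5, 0.9, 0.4, 0.1])
print(best)  # index of the pattern with the highest estimated reward
```

In the closed-loop setting described above, the reward signal would come from the simulator's IC responses rather than from a fixed similarity table.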
  12. A real-time virtual auditory system for spatially dynamic perception research

    NASA Astrophysics Data System (ADS)

    Scarpaci, Jacob W.; Colburn, H. Steven

    2004-05-01

    A Real Time Virtual Auditory System (RT-VAS) is being developed to provide a high-performance, cost-effective, flexible system that can dynamically update filter coefficients in hard real time on a PC. An InterSense head tracker is incorporated to provide low-latency head tracking to allow studies with head motion. Processing is done using a real-time Linux Kernel (RTAI kernel patch) which allows for precise processor scheduling, resulting in negligible time jitter of output samples. Output is calculated at a sample rate of 44.1 kHz and displayed using a National Instruments DAQ. An object-oriented approach to system development allows for customizable input, position, and calculation routines as well as multiple independent auditory objects. Input and position may be calculated in real-time or read from a file. Calculation of output may include filtering with spatially sampled HRTFs or analytic models and head movements may be recorded to file. Limitations of the system are tied to the speed of the processor, thus complexity of experiments scales with speed of computer hardware. The current system handles multiple moving sources while tracking head position. Preliminary psychoacoustic results with head motion will be shown, as well as a demonstration of the system. [Work supported by NIH DC00100.]

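The core render step of such a virtual auditory system is: compensate the source azimuth for the tracked head orientation, select the nearest spatially sampled head-related impulse response (HRIR), and filter the signal block for each ear. A minimal sketch; the 2-tap "HRIRs" and three measurement azimuths are placeholders, not real measurements or the RT-VAS implementation:

```python
# azimuth (deg) -> (left-ear impulse response, right-ear impulse response)
HRIRS = {
    -90: ([0.9, 0.1], [0.2, 0.1]),
      0: ([0.6, 0.2], [0.6, 0.2]),
     90: ([0.2, 0.1], [0.9, 0.1]),
}

def convolve(signal, h):
    # Direct-form FIR convolution of a signal block with an impulse response.
    out = [0.0] * (len(signal) + len(h) - 1)
    for i, s in enumerate(signal):
        for j, hj in enumerate(h):
            out[i + j] += s * hj
    return out

def render(block, source_az, head_az):
    """Spatialize one block: compensate for head rotation (head tracking),
    choose the nearest spatially sampled HRIR, and filter for each ear."""
    relative_az = source_az - head_az
    nearest = min(HRIRS, key=lambda az: abs(az - relative_az))
    hl, hr = HRIRS[nearest]
    return convolve(block, hl), convolve(block, hr)

left, right = render([1.0, 0.0, 0.0, 0.0], source_az=80, head_az=0)
print(left[0], right[0])  # right ear louder for a source at +80 degrees
```

A real system would interpolate between measured HRIRs, cross-fade filter coefficients when the head moves, and run this loop inside the hard real-time scheduler described above.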
  13. Spatial Learning and Computer Simulations in Science

    ERIC Educational Resources Information Center

    Lindgren, Robb; Schwartz, Daniel L.

    2009-01-01

    Interactive simulations are entering mainstream science education. Their effects on cognition and learning are often framed by the legacy of information processing, which emphasized amodal problem solving and conceptual organization. In contrast, this paper reviews simulations from the vantage of research on perception and spatial learning,…

  15. Stimulus control and auditory discrimination learning sets in the bottlenose dolphin

    PubMed Central

    Herman, Louis M.; Arbeit, William R.

    1973-01-01

    The learning efficiency of an Atlantic bottlenose dolphin was evaluated using auditory discrimination learning-set tasks. Efficiency, as measured by the probability of a correct response on Trial 2 of a new discrete-trial, two-choice auditory discrimination problem, reached levels comparable to those attained by advanced species of nonhuman primates. Runs of errorless problems in some cases rivaled those reported for individual rhesus monkeys in visual discrimination learning-set tasks. This level of stimulus control of responses to new auditory discriminanda was attained through (a) the development of a sequential within-trial method for presentation of a pair of auditory discriminanda; (b) the extensive use of fading methods to train initial discriminations, followed by the fadeout of the use of fading; (c) the development of listening behavior through control of the animal's responses during projection of the auditory discriminanda; and (d) the use of highly discriminable auditory stimuli, by applying results of a parametric evaluation of discriminability of selected acoustic variables. Learning efficiency was tested using a cueing method on Trial 1 of each new discrimination, to allow the animal to identify the positive stimulus before its response. Efficiency was also tested with the more common blind baiting method, in which the Trial 1 response was reinforced on only a random half of the problems. Efficiency was high for both methods. The overall results were generally in keeping with expectations of learning capacity based on the large size and high degree of cortical complexity of the brain of the bottlenose dolphin. PMID:16811670

  16. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  17. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  18. Thalamic and parietal brain morphology predicts auditory category learning.

    PubMed

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.

  19. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    PubMed

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  20. Active and passive contributions to spatial learning.

    PubMed

    Chrastil, Elizabeth R; Warren, William H

    2012-02-01

    It seems intuitively obvious that active exploration of a new environment will lead to better spatial learning than will passive exposure. However, the literature on this issue is decidedly mixed-in part, because the concept itself is not well defined. We identify five potential components of active spatial learning and review the evidence regarding their role in the acquisition of landmark, route, and survey knowledge. We find that (1) idiothetic information in walking contributes to metric survey knowledge, (2) there is little evidence as yet that decision making during exploration contributes to route or survey knowledge, (3) attention to place-action associations and relevant spatial relations contributes to route and survey knowledge, although landmarks and boundaries appear to be learned without effort, (4) route and survey information are differentially encoded in subunits of working memory, and (5) there is preliminary evidence that mental manipulation of such properties facilitates spatial learning. Idiothetic information appears to be necessary to reveal the influence of attention and, possibly, decision making in survey learning, which may explain the mixed results in desktop virtual reality. Thus, there is indeed an active advantage in spatial learning, which manifests itself in the task-dependent acquisition of route and survey knowledge.

  1. Dynamic stimulation evokes spatially focused receptive fields in bat auditory cortex.

    PubMed

    Hoffmann, Susanne; Schuller, Gerd; Firzlaff, Uwe

    2010-01-01

    Bats can orient and hunt for prey in complete darkness using echolocation. Due to the pulse-like character of call emission they receive a stroboscopic view of their environment. During target approach, bats adjust their emitted echolocation calls to the specific requirements of the dynamically changing environmental and behavioral context. In addition to changes of the spectro-temporal call features, the spatial focusing of the beam of the sonar emissions onto the target is a conspicuous feature during target tracking. The neural processes underlying the complex sensory-motor interactions during target tracking are not well understood. In this study, we used a two-tone-pulse paradigm with 81 combinations of inter-aural intensity differences and six inter-pulse intervals in a passive hearing task to tackle the question of how transient changes in the azimuthal position of successive sounds are encoded by neurons in the auditory cortex of the bat Phyllostomus discolor. In a population of cortical neurons (11%, 24 of 217), spatial receptive fields were focused to a small region of frontal azimuthal positions during dynamic stimulation with tone-pulse pairs at short inter-pulse intervals. The response of these neurons might be important for the behaviorally observed locking of the sonar beam onto a selected target during the later stages of target tracking. Most interestingly, the majority of these neurons (88%, 21 of 24) were located in the posterior dorsal part of the auditory cortex. This cortical subfield might thus be specifically involved in the analysis of dynamic acoustic scenes.

  2. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]

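IACC is conventionally defined as the maximum of the normalized cross-correlation between the two ear signals over lags of about ±1 ms. A minimal sketch of that computation (the 1 kHz sample rate and the test tones are illustrative only, not the authors' analysis):

```python
import math

def iacc(left, right, fs=1000, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the maximum absolute value
    of the normalized cross-correlation of the two ear signals over lags of
    +/- max_lag_ms (the conventional definition)."""
    max_lag = int(fs * max_lag_ms / 1000)
    el = math.sqrt(sum(x * x for x in left))    # signal energies for
    er = math.sqrt(sum(x * x for x in right))   # normalization
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[i] * right[i + lag]
                for i in range(len(left))
                if 0 <= i + lag < len(right))
        best = max(best, abs(s) / (el * er))
    return best

# Identical ear signals are fully correlated; unrelated tones are not.
a = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(200)]
b = [math.sin(2 * math.pi * 75 * t / 1000 + 1.0) for t in range(200)]
print(round(iacc(a, a), 2))  # 1.0
print(round(iacc(a, b), 2))  # well below 1.0
```

Decorrelated low-frequency feeds of the kind discussed above would drive this coefficient down, which is precisely the manipulation a single mono subwoofer removes.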
  3. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    USDA-ARS?s Scientific Manuscript database

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  4. An electrophysiological correlate of conflict processing in an auditory spatial Stroop task: the effect of individual differences in navigational style.

    PubMed

    Buzzell, George A; Roberts, Daniel M; Baldwin, Carryl L; McDonald, Craig G

    2013-11-01

    Recent work has identified an event-related potential (ERP) component, the incongruency negativity (N(inc)), which is sensitive to auditory Stroop conflict processing. Here, we investigated how this index of conflict processing is influenced by individual differences in cognitive style. There is evidence that individuals differ in the strategy they use to navigate through the environment; some use a predominantly verbal-egocentric strategy while others rely more heavily on a spatial-allocentric strategy. In addition, navigational strategy, assessed by a way-finding questionnaire, is predictive of performance on an auditory spatial Stroop task, in which either the semantic or spatial dimension of stimuli must be ignored. To explore the influence of individual differences in navigational style on conflict processing, participants took part in an auditory spatial Stroop task while the electroencephalogram (EEG) was recorded. Whereas behavioral performance only showed a main effect of congruency, we observed the predicted three-way interaction between congruency, task type and navigational style with respect to our physiological measure of Stroop conflict. Specifically, congruency-dependent modulation of the N(inc) was observed only when participants performed their non-dominant task (e.g., verbal navigators attempting to ignore semantic information). These results confirm that the N(inc) reliably indexes auditory Stroop conflict and extend previous results by demonstrating that the N(inc) is predictably modulated by individual differences in cognitive style. © 2013.

  5. Less Is More: Latent Learning Is Maximized by Shorter Training Sessions in Auditory Perceptual Learning

    PubMed Central

    Molloy, Katharine; Moore, David R.; Sohoglu, Ediz; Amitay, Sygal

    2012-01-01

    Background The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. Methodology/Principal Findings We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Conclusions/Significance Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions.

  6. Learning Auditory Discrimination with Computer-Assisted Instruction: A Comparison of Two Different Performance Objectives.

    ERIC Educational Resources Information Center

    Steinhaus, Kurt A.

    A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…

  7. Relationship Patterns between Central Auditory Processing Disorders and Language Disorders, Learning Disabilities, and Sensory Integration Dysfunction.

    ERIC Educational Resources Information Center

    Kruger, Retha J.; Kruger, Johann J.; Hugo, Rene; Campbell, Nicole G.

    2001-01-01

    A multimodal assessment of 19 children (ages 4-9) with learning disabilities was used to identify problem areas. The majority presented with deficits involving both visual and auditory modalities, as well as problems with motor abilities and concentration skills. Subgroups of problem areas were found to occur together. (Contains references.)…

  8. The Effect of Auditory Integration Training on the Working Memory of Adults with Different Learning Preferences

    ERIC Educational Resources Information Center

    Ryan, Tamara E.

    2014-01-01

    The purpose of this study was to determine the effects of auditory integration training (AIT) on a component of the executive function of working memory; specifically, to determine if learning preferences might have an interaction with AIT to increase the outcome for some learners. The question asked by this quantitative pretest posttest design is…

  9. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  11. Auditory Training for Experienced and Inexperienced Second-Language Learners: Native French Speakers Learning English Vowels

    ERIC Educational Resources Information Center

    Iverson, Paul; Pinet, Melanie; Evans, Bronwen G.

    2012-01-01

    This study examined whether high-variability auditory training on natural speech can benefit experienced second-language English speakers who already are exposed to natural variability in their daily use of English. The subjects were native French speakers who had learned English in school; experienced listeners were tested in England and the less…

  12. Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities.

    PubMed

    Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo

    2015-10-01

    Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and the phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness and auditory middle latency response were tested with electrodes placed on the left and right hemispheres. The correlation between the measurements was performed using the Spearman rank correlation coefficient. Results There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed a low score for the age group, although on the objective examination, prolonged Pa latency in the contralateral pathway was observed. A weak to moderate negative correlation for Pa wave latency was observed, as was a weak positive correlation for Na-Pa amplitude.
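
    The Spearman rank correlation used in this study is simple to reproduce. The sketch below uses invented latency and test-score values (not the study's data) to show the computation with `scipy.stats.spearmanr`; a negative rho corresponds to the reported direction of the Pa-latency effect.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired measurements for eight students (illustrative only):
# Pa-component latency (ms) and syllabic-awareness test score.
pa_latency = np.array([28.1, 30.4, 26.9, 33.2, 31.5, 29.8, 34.0, 27.5])
syllabic_score = np.array([18, 14, 20, 10, 15, 13, 9, 19])

# Spearman's rho operates on ranks, so it captures any monotonic
# relationship and is robust to outliers -- suitable for small samples.
rho, p_value = spearmanr(pa_latency, syllabic_score)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
```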

  14. Auditory evoked responses and learning and awareness during general anesthesia.

    PubMed

    Ghoneim, M M; Block, R I; Dhanaraj, V J; Todd, M M; Choi, W W; Brown, C K

    2000-02-01

    There is a major distinction between conscious and unconscious learning. Monitoring the mid-latency auditory evoked responses (AER) has been proposed as a measure to ascertain the adequacy of the hypnotic state during surgery. In the present study, we investigated the presence of explicit and implicit memories after anesthesia and examined the relationships of such memories to the AER. We studied 180 patients scheduled for elective surgical procedures. After a thiopental induction, one of four anesthetic regimens was studied: Opioid bolus: 7.5 microg x kg(-1) fentanyl, 70% N2O, with 2.5 microg x kg(-1) supplements as needed (n=100); Opioid infusion: Alfentanil 50 microg x kg(-1) bolus, 1-1.5 microg x kg(-1) x min(-1) infusion, 70% N2O (n=40); Isoflurane 0.3%: Fentanyl 1 microg x kg(-1), 70% N2O, isoflurane 0.3% expired (n=16); Isoflurane 0.7%: Fentanyl 1 microg x kg(-1), 70% N2O, isoflurane 0.7% expired (n=23). AER were recorded before anesthesia, 5 min after surgical incision and then every 30 min until the end of surgery. A tape of either the story of the "Three Little Pigs" or the "Wizard of Oz" was played continuously between the recordings. Explicit memory was assessed postoperatively by tests of recall and recognition, and implicit memory was assessed by the frequency of story-related free associations to target words from the stories, which were solicited twice during a structured interview. Six patients showed explicit recall of intraoperative events: All received the opioid bolus regimen. About 7% of patients reported dreaming during anesthesia. The incidence of picking the correct story that had been presented during anesthesia averaged 49%, i.e., very close to chance level. Overall, priming occurred only at the second association test for the opioid bolus regimen, for which the frequency of an association to the presented story among those not giving an association to the control story was 26%, which was double the frequency (13%) of an association to the

  15. Spatial learning and goldfish telencephalon NMDA receptors.

    PubMed

    Gómez, Yolanda; Vargas, Juan Pedro; Portavella, Manuel; López, Juan Carlos

    2006-05-01

    Recent results have demonstrated that the mammalian hippocampus and the dorso-lateral telencephalon of ray-finned fishes share functional similarities in relation to spatial memory systems. In the present study, we investigated whether the physiological mechanisms of this hippocampus-dependent spatial memory system were also similar in mammals and ray-finned fishes, and therefore possibly conserved through evolution in vertebrates. In Experiment 1, we studied the effects of the intracranial administration of the noncompetitive NMDA receptor antagonist MK-801 during the acquisition of a spatial task. The results indicated dose-dependent drug-induced impairment of spatial memory. Experiment 2 evaluated whether MK-801 disrupted retrieval of a learned spatial response. Data showed that the administration of MK-801 did not impair the retrieval of the information previously stored. The last experiment analyzed the involvement of the telencephalic NMDA receptors in a spatial and in a cue task. Results showed a clear impairment in spatial learning but not in cue learning when NMDA receptors were blocked. As a whole, these results indicate that the physiological mechanisms of this hippocampus-dependent system could be a general feature in vertebrates, and therefore phylogenetically conserved.

  16. Early continuous white noise exposure alters auditory spatial sensitivity and expression of GAD65 and GABAA receptor subunits in rat auditory cortex.

    PubMed

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2010-04-01

    Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 and gamma-aminobutyric acid (GABA)(A) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, indicating a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.

  17. Extreme Learning Machines for spatial environmental data

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2015-12-01

    The use of machine learning algorithms has increased in a wide variety of domains (from finance to biocomputing and astronomy), and nowadays has a significant impact on the geoscience community. In most real cases geoscience data modelling problems are multivariate, high dimensional, variable at several spatial scales, and are generated by non-linear processes. For such complex data, the spatial prediction of continuous (or categorical) variables is a challenging task. The aim of this paper is to investigate the potential of the recently developed Extreme Learning Machine (ELM) for environmental data analysis, modelling and spatial prediction purposes. An important contribution of this study deals with an application of a generic self-consistent methodology for environmental data driven modelling based on Extreme Learning Machine. Both real and simulated data are used to demonstrate applicability of ELM at different stages of the study to understand and justify the results.
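
    The core of an Extreme Learning Machine is compact: hidden-layer weights are drawn at random and never trained, and only the linear output weights are fitted, via least squares. A minimal regression sketch on toy 2-D "spatial" data (NumPy only; the data and layer size are arbitrary illustrative choices, not the paper's setup):

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=1):
    """Fit a basic ELM regressor: random hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (untrained)
    b = rng.normal(size=n_hidden)                 # random biases (untrained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # solve output weights only
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy spatial prediction task: recover a smooth field from 2-D coordinates.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(f"training MSE: {mse:.5f}")
```

    Because training reduces to a single linear solve, fitting is far faster than backpropagation, which is the property that makes ELM attractive for large environmental datasets.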

  18. Physical exercise, neuroplasticity, spatial learning and memory.

    PubMed

    Cassilhas, Ricardo C; Tufik, Sergio; de Mello, Marco Túlio

    2016-03-01

    There has long been discussion regarding the positive effects of physical exercise on brain activity. However, physical exercise has only recently begun to receive the attention of the scientific community, with major interest in its effects on the cognitive functions, spatial learning and memory, as a non-drug method of maintaining brain health and treating neurodegenerative and/or psychiatric conditions. In humans, several studies have shown the beneficial effects of aerobic and resistance exercises in adult and geriatric populations. More recently, studies employing animal models have attempted to elucidate the mechanisms underlying neuroplasticity related to physical exercise-induced spatial learning and memory improvement, even under neurodegenerative conditions. In an attempt to clarify these issues, the present review aims to discuss the role of physical exercise in the improvement of spatial learning and memory and the cellular and molecular mechanisms involved in neuroplasticity.

  19. Musical metaphors: evidence for a spatial grounding of non-literal sentences describing auditory events.

    PubMed

    Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2015-03-01

    This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would have been expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all Experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence implied pitch height and response location. The results are in line with grounded models of language comprehension proposing that sensory motor experiences are being elicited when processing literal as well as non-literal sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex

    PubMed Central

    Levy, Robert B.; Reyes, Alex D.

    2012-01-01

    The role of local cortical activity in shaping neuronal responses is controversial. Among other questions, it is unknown how the diverse response patterns reported in vivo - lateral inhibition in some cases, approximately balanced excitation and inhibition (co-tuning) in others - compare to the local spread of synaptic connectivity. Excitatory and inhibitory activity might cancel each other out, or, if one outweighs the other, receptive field (RF) properties might be substantially affected. As a step toward addressing this question, we used multiple intracellular recording in mouse primary auditory cortical slices to map synaptic connectivity among excitatory pyramidal (P) cells and the two broad classes of inhibitory cells, fast-spiking (FS) and non-FS cells in the principal input layer. Connection probability was distance-dependent; the spread of connectivity, parameterized by Gaussian fits to the data, was comparable for all cell types, ranging from 85 to 114 μm. With brief stimulus trains, unitary synapses formed by FS interneurons were stronger than other classes of synapses; synapse strength did not correlate with distance between cells. The physiological data were qualitatively consistent with predictions derived from anatomical reconstruction. We also analyzed the truncation of neuronal processes due to slicing; overall connectivity was reduced but the spatial pattern was unaffected. The comparable spatial patterns of connectivity and relatively strong excitatory-inhibitory interconnectivity are consistent with a theoretical model where either lateral inhibition or co-tuning can predominate, depending on the structure of the input. PMID:22514322
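
    The Gaussian parameterization of distance-dependent connectivity reported above can be illustrated with a short fit. The binned values below are hypothetical stand-ins for paired-recording data, and `scipy.optimize.curve_fit` is an assumed choice of fitting routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, a, sigma):
    # Connection probability modelled as a Gaussian of intersomatic distance.
    return a * np.exp(-d ** 2 / (2 * sigma ** 2))

# Hypothetical data: distance bins (um) and the fraction of tested pairs
# in each bin that were synaptically connected (illustrative values only).
distance_um = np.array([25.0, 50.0, 75.0, 100.0, 125.0, 150.0, 175.0, 200.0])
p_connected = np.array([0.42, 0.36, 0.27, 0.18, 0.11, 0.06, 0.03, 0.015])

(a_hat, sigma_hat), _ = curve_fit(gaussian, distance_um, p_connected, p0=(0.5, 80.0))
print(f"peak probability ~{a_hat:.2f}, spatial spread sigma ~{sigma_hat:.0f} um")
```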

  1. ERP Indications for Sustained and Transient Auditory Spatial Attention with Different Lateralization Cues

    NASA Astrophysics Data System (ADS)

    Widmann, Andreas; Schröger, Erich

    The present study was designed to investigate ERP effects of auditory spatial attention in a sustained attention condition (where the to-be-attended location is defined in a blockwise manner) and in a transient attention condition (where the to-be-attended location is defined in a trial-by-trial manner). Lateralization in the azimuth plane was manipulated (a) via monaural presentation of left- and right-ear sounds, (b) via interaural intensity differences, (c) via interaural time differences, (d) via an artificial-head recording, and (e) via free-field stimulation. Ten participants were presented with frequent Nogo- and infrequent Go-stimuli. In one half of the experiment participants were instructed to press a button if they detected a Go-stimulus at a predefined side (sustained attention); in the other half they were required to detect Go-stimuli following an arrow-cue at the cued side (transient attention). Results revealed negative differences (Nd) between ERPs elicited by to-be-attended and to-be-ignored sounds in all conditions. These Nd-effects were larger for the sustained than for the transient attention condition, indicating that attentional selection according to spatial criteria is improved when subjects can focus on one and the same location for a series of stimuli.

  2. Differential signaling to subplate neurons by spatially specific silent synapses in developing auditory cortex.

    PubMed

    Meng, Xiangying; Kao, Joseph P Y; Kanold, Patrick O

    2014-06-25

    Subplate neurons (SPNs) form one of the earliest maturing circuits in the cerebral cortex and are crucial to cortical development. In addition to thalamic inputs, subsets of SPNs receive excitatory AMPAR-mediated inputs from the developing cortical plate in the second postnatal week. Functionally silent (non-AMPAR-mediated) excitatory synapses exist in several systems during development, and the existence of such inputs can precede the appearance of AMPAR-mediated synapses. Because SPNs receive inputs from presynaptic cells in different cortical layers, we investigated whether AMPAR-mediated and silent synapses might originate in different layers. We used laser-scanning photostimulation in acute thalamocortical slices of mouse auditory cortex during the first 2 postnatal weeks to study the spatial origin of silent synapses onto SPNs. We find that silent synapses from the cortical plate are present on SPNs and that they originate from different cortical locations than functional (AMPAR-mediated) synapses. Moreover, we find that SPNs can be categorized based on the spatial pattern of silent and AMPAR-mediated connections. Because SPNs can be activated at young ages by thalamic inputs, distinct populations of cortical neurons at young ages have the ability to signal to SPNs depending on the activation state of SPNs. Because during development intracortical circuits are spontaneously active, our results suggest that SPNs might integrate ascending input from the thalamus with spontaneously generated cortical activity patterns. Together, our results suggest that SPNs are an integral part of the developing intracortical circuitry and thereby can sculpt thalamocortical connections.

  3. A disinhibitory microcircuit for associative fear learning in the auditory cortex.

    PubMed

    Letzkus, Johannes J; Wolff, Steffen B E; Meyer, Elisabeth M M; Tovote, Philip; Courtin, Julien; Herry, Cyril; Lüthi, Andreas

    2011-12-07

    Learning causes a change in how information is processed by neuronal circuits. Whereas synaptic plasticity, an important cellular mechanism, has been studied in great detail, we know much less about how learning is implemented at the level of neuronal circuits and, in particular, how interactions between distinct types of neurons within local networks contribute to the process of learning. Here we show that acquisition of associative fear memories depends on the recruitment of a disinhibitory microcircuit in the mouse auditory cortex. Fear-conditioning-associated disinhibition in auditory cortex is driven by foot-shock-mediated cholinergic activation of layer 1 interneurons, in turn generating inhibition of layer 2/3 parvalbumin-positive interneurons. Importantly, pharmacological or optogenetic block of pyramidal neuron disinhibition abolishes fear learning. Together, these data demonstrate that stimulus convergence in the auditory cortex is necessary for associative fear learning to complex tones, define the circuit elements mediating this convergence and suggest that layer-1-mediated disinhibition is an important mechanism underlying learning and information processing in neocortical circuits.

  4. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    PubMed

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words while she made a moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrasts. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  5. Computer-Based Auditory Training (CBAT): Benefits for Children with Language- and Reading-Related Learning Difficulties

    ERIC Educational Resources Information Center

    Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Campbell, Nicci; Luxon, Linda M.

    2010-01-01

    This article reviews the evidence for computer-based auditory training (CBAT) in children with language, reading, and related learning difficulties, and evaluates the extent it can benefit children with auditory processing disorder (APD). Searches were confined to studies published between 2000 and 2008, and they are rated according to the level…

  6. Comparison of Auditory/Visual and Visual/Motor Practice on the Spelling Accuracy of Learning Disabled Children.

    ERIC Educational Resources Information Center

    Aleman, Cheryl; And Others

    1990-01-01

    Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)

  7. Encoding, learning, and spatial updating of multiple object locations specified by 3-D sound, spatial language, and vision.

    PubMed

    Klatzky, Roberta L; Lippa, Yvonne; Loomis, Jack M; Golledge, Reginald G

    2003-03-01

    Participants standing at an origin learned the distance and azimuth of target objects that were specified by 3-D sound, spatial language, or vision. We tested whether the ensuing target representations functioned equivalently across modalities for purposes of spatial updating. In experiment 1, participants localized targets by pointing to each and verbalizing its distance, both directly from the origin and at an indirect waypoint. In experiment 2, participants localized targets by walking to each directly from the origin and via an indirect waypoint. Spatial updating bias was estimated by the spatial-coordinate difference between indirect and direct localization; noise from updating was estimated by the difference in variability of localization. Learning rate and noise favored vision over the two auditory modalities. For all modalities, bias during updating tended to move targets forward, comparably so for three and five targets and for forward and rightward indirect-walking directions. Spatial language produced additional updating bias and noise from updating. Although spatial representations formed from language afford updating, they do not function entirely equivalently to those from intrinsically spatial modalities.
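
    The two updating measures described above, bias (the spatial-coordinate difference between indirect and direct localization) and noise (the difference in localization variability), reduce to simple vector arithmetic. A sketch with invented 2-D response endpoints (not the study's data):

```python
import numpy as np

# Hypothetical localization endpoints (m) for one target: responses made
# directly from the origin versus via an indirect waypoint.
direct = np.array([[2.1, 0.40], [1.9, 0.50], [2.0, 0.30], [2.2, 0.45]])
indirect = np.array([[2.4, 0.50], [2.2, 0.80], [2.6, 0.45], [2.45, 0.70]])

# Updating bias: shift of the mean indirect response relative to the
# mean direct response (a forward shift shows up on the first axis here).
bias = indirect.mean(axis=0) - direct.mean(axis=0)

def spread(responses):
    # Mean distance of responses from their own centroid.
    return np.linalg.norm(responses - responses.mean(axis=0), axis=1).mean()

# Noise from updating: extra variability introduced by the indirect route.
noise = spread(indirect) - spread(direct)
print(f"bias = {np.round(bias, 2)}, added noise = {noise:.3f} m")
```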

  8. Anatomical substrates of visual and auditory miniature second-language learning.

    PubMed

    Newman-Norlund, Roger D; Frey, Scott H; Petitto, Laura-Ann; Grafton, Scott T

    2006-12-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance.

  9. Temporal Dynamics in Auditory Perceptual Learning: Impact of Sequencing and Incidental Learning

    PubMed Central

    Church, Barbara A.; Mercado, Eduardo; Wisniewski, Matthew G.; Liu, Estella H.

    2013-01-01

    Training can improve perceptual sensitivities. We examined whether the temporal dynamics and incidental versus intentional nature of training are important. Within the context of a birdsong rate discrimination task, we examined whether the sequencing of pre-testing exposure to the stimuli mattered. Easy-to-hard (progressive) sequencing of stimuli during pre-exposure led to more accurate performance with the critical difficult contrast and greater generalization to new contrasts in the task, compared to equally variable training in either a random or anti-progressive order. This greater accuracy was also evident when participants experienced the progressively-sequenced stimuli in a different incidental learning task that did not involve direct auditory training. The results clearly show the importance of temporal dynamics (sequencing) in learning, and that the progressive training advantages cannot be fully explained by direct associations between stimulus features and the corresponding responses. The current findings are consistent with a hierarchical account of perceptual learning among other possibilities, but not with explanations that focus on stimulus variability. PMID:22642235

  10. Role of cortical neurodynamics for understanding the neural basis of motivated behavior - lessons from auditory category learning.

    PubMed

    Ohl, Frank W

    2015-04-01

    Rhythmic activity appears in the auditory cortex in both microscopic and macroscopic observables and is modulated by both bottom-up and top-down processes. How this activity serves both types of processes is largely unknown. Here we review studies that have recently improved our understanding of potential functional roles of large-scale global dynamic activity patterns in auditory cortex. The experimental paradigm of auditory category learning allowed critical testing of the hypothesis that global auditory cortical activity states are associated with endogenous cognitive states mediating the meaning associated with an acoustic stimulus rather than with activity states that merely represent the stimulus for further processing.

  11. Mapping phonological information from auditory to written modality during foreign vocabulary learning.

    PubMed

    Kaushanskaya, Margarita; Marian, Viorica

    2008-12-01

    Learning to read in a foreign language often entails recognizing the printed form of words learned by sound. In the current study, the ability to map novel phonological information from the auditory modality onto the written modality was examined at different levels of overlap between the native language and an artificially constructed foreign language. In this study, monolingual English-speaking adults learned novel foreign words in the auditory modality. Recognition testing was first conducted in the auditory modality and then in the written modality. Participants who learned foreign words that matched English phonology showed similar accuracy rates when tested in either modality. Participants who learned foreign words that mismatched English phonology showed decreased recognition accuracy when tested in the written modality. Results indicate that cross-linguistic matching in phonology facilitated mapping of phonological information to the written modality. In addition, at different levels of cross-linguistic overlap, specific cognitive skills were found to correlate with the ability to map phonological information across modalities. This finding suggests that the cognitive skills required for acquisition of a foreign language may vary depending upon degree of cross-linguistic similarity.

  12. Learning on auditory discrimination tasks in normal-hearing listeners: Implications for hearing rehabilitation

    NASA Astrophysics Data System (ADS)

    Wright, Beverly A.

    2005-04-01

    Hearing rehabilitation extends beyond simply fitting a hearing aid or cochlear implant. To improve the benefit of these devices, it must be established which auditory abilities can be improved with training. Toward this end, learning in normal-hearing listeners was examined on five auditory discrimination tasks: frequency, intensity, interaural-time-difference (ITD), interaural-level-difference (ILD), and duration. Because the same training regimen was used throughout, any differences in the learning patterns across these trained discriminations likely reflect differences in the plasticity of the underlying mechanisms, at least for that regimen. The influence of training was assessed by comparing the improvements in discrimination threshold on trained and untrained conditions between listeners who were given multiple-hour practice on a single discrimination condition and those who were not. Learning on the five tasks followed one of two general patterns. For ITD and intensity discrimination, multiple-hour practice did not lead to greater learning than that seen in untrained listeners. In contrast, for ILD, duration, and frequency discrimination, such practice yielded greater learning, but only on a subset of conditions. The differences in the plasticity across these auditory tasks in normal-hearing listeners imply that cochlear-implant users may benefit more from training on some tasks than others. [Work supported by NIH.]

  13. Sequence learning modulates neural responses and oscillatory coupling in human and monkey auditory cortex

    PubMed Central

    Attaheri, Adam; Wilson, Benjamin; Rhone, Ariane E.; Nourski, Kirill V.; Gander, Phillip E.; Kovach, Christopher K.; Kawasaki, Hiroto; Griffiths, Timothy D.; Howard, Matthew A.; Petkov, Christopher I.

    2017-01-01

    Learning complex ordering relationships between sensory events in a sequence is fundamental for animal perception and human communication. While it is known that rhythmic sensory events can entrain brain oscillations at different frequencies, how learning and prior experience with sequencing relationships affect neocortical oscillations and neuronal responses is poorly understood. We used an implicit sequence learning paradigm (an “artificial grammar”) in which humans and monkeys were exposed to sequences of nonsense words with regularities in the ordering relationships between the words. We then recorded neural responses directly from the auditory cortex in both species in response to novel legal sequences or ones violating specific ordering relationships. Neural oscillations in both monkeys and humans in response to the nonsense word sequences show strikingly similar hierarchically nested low-frequency phase and high-gamma amplitude coupling, establishing this form of oscillatory coupling—previously associated with speech processing in the human auditory cortex—as an evolutionarily conserved biological process. Moreover, learned ordering relationships modulate the observed form of neural oscillatory coupling in both species, with temporally distinct neural oscillatory effects that appear to coordinate neuronal responses in the monkeys. This study identifies the conserved auditory cortical neural signatures involved in monitoring learned sequencing operations, evident as modulations of transient coupling and neuronal responses to temporally structured sensory input. PMID:28441393

  14. Spatial Reference Frame of Incidentally Learned Attention

    ERIC Educational Resources Information Center

    Jiang, Yuhong V.; Swallow, Khena M.

    2013-01-01

    Visual attention prioritizes information presented at particular spatial locations. These locations can be defined in reference frames centered on the environment or on the viewer. This study investigates whether incidentally learned attention uses a viewer-centered or environment-centered reference frame. Participants conducted visual search on a…

  15. Auditory tuning for spatial cues in the barn owl basal ganglia.

    PubMed

    Cohen, Y E; Knudsen, E I

    1994-07-01

    1. The basal ganglia are known to contribute to spatially guided behavior. In this study, we investigated the auditory response properties of neurons in the barn owl paleostriatum augmentum (PA), the homologue of the mammalian striatum. The data suggest that the barn owl PA is specialized to process spatial cues and, like the mammalian striatum, is involved in spatial behavior. 2. Single- and multiunit sites were recorded extracellularly in ketamine-anesthetized owls. Spatial receptive fields were measured with a free-field sound source, and tuning for frequency and interaural differences in timing (ITD) and level (ILD) was assessed using digitally synthesized dichotic stimuli. 3. Spatial receptive fields measured at nine multiunit sites were tuned to restricted regions of space: tuning widths at half-maximum response averaged 22 +/- 9.6 degrees (mean +/- SD) in azimuth and 54 +/- 22 degrees in elevation. 4. PA sites responded strongly to broadband sounds. When frequency tuning could be measured (n = 145/201 sites), tuning was broad, averaging 2.7 kHz at half-maximum response, and tended to be centered near the high end of the owl's audible range. The mean best frequency was 6.2 kHz. 5. All PA sites (n = 201) were selective for both ITD and ILD. ITD tuning curves typically exhibited a single, large "primary" peak and often smaller, "secondary" peaks at ITDs ipsilateral and/or contralateral to the primary peak. Three indices quantified the selectivity of PA sites for ITD. The first index, which was the percent difference between the minimum and maximum response as a function of ITD, averaged 100 +/- 29%. The second index, which represented the size of the largest secondary peak relative to that of the primary peak, averaged 49 +/- 23%. The third index, which was the width of the primary ITD peak at half-maximum response, averaged only 66 +/- 35 microseconds. 6. The majority (96%; n = 192/201) of PA sites were tuned to a single "best" value of ILD. The widths of ILD …
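    The three ITD-selectivity indices described in the abstract can be computed directly from a tuning curve; a sketch with a made-up tuning curve (the helper `itd_indices` and all numbers are illustrative, not from the study):

```python
import numpy as np

def itd_indices(itds_us, rates):
    """Compute the three ITD-selectivity indices described for PA sites:
    (1) depth of response modulation (%), (2) size of the largest secondary
    peak relative to the primary peak (%), (3) width of the primary peak at
    half-maximum response (microseconds)."""
    rates = np.asarray(rates, dtype=float)
    depth = 100.0 * (rates.max() - rates.min()) / rates.max()   # index 1

    half = rates.max() / 2.0
    above = rates >= half
    # Primary peak = contiguous run of above-half points containing the maximum.
    lo = hi = rates.argmax()
    while lo > 0 and above[lo - 1]:
        lo -= 1
    while hi < len(rates) - 1 and above[hi + 1]:
        hi += 1
    width = itds_us[hi] - itds_us[lo]                           # index 3

    outside = np.concatenate([rates[:lo], rates[hi + 1:]])
    secondary = 100.0 * outside.max() / rates.max() if outside.size else 0.0  # index 2
    return depth, secondary, width

# Illustrative tuning curve: primary peak at 0 us, secondary bump near +200 us.
itds = np.arange(-300, 301, 50)   # microseconds
rates = np.array([5, 8, 20, 35, 60, 90, 100, 85, 55, 30, 45, 25, 10], dtype=float)
depth, secondary, width = itd_indices(itds, rates)
```

Note the simplification: "secondary peak" here is just the largest response outside the primary peak, whereas the paper presumably identified true local maxima.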

  16. Effects of auditory recognition learning on the perception of vocal features in European starlings (Sturnus vulgaris).

    PubMed

    Meliza, C Daniel

    2011-11-01

    Learning to recognize complex sensory signals can change the way they are perceived. European starlings (Sturnus vulgaris) recognize other starlings by their song, which consists of a series of complex, stereotyped motifs. Song recognition learning is accompanied by plasticity in secondary auditory areas, suggesting that perceptual learning is involved. Here, to investigate whether perceptual learning can be observed behaviorally, a same-different operant task was used to measure how starlings perceived small differences in motif structure. Birds trained to recognize conspecific songs were better at detecting variations in motifs from the songs they learned, even though this variation was not directly necessary to learn the associative task. Discrimination also improved as the reference stimulus was repeated multiple times. Perception of the much larger differences between different motifs was unaffected by training. These results indicate that sensory representations of motifs are enhanced when starlings learn to recognize songs.

  17. Effects of auditory recognition learning on the perception of vocal features in European starlings (Sturnus vulgaris)

    PubMed Central

    Daniel Meliza, C.

    2011-01-01

    Learning to recognize complex sensory signals can change the way they are perceived. European starlings (Sturnus vulgaris) recognize other starlings by their song, which consists of a series of complex, stereotyped motifs. Song recognition learning is accompanied by plasticity in secondary auditory areas, suggesting that perceptual learning is involved. Here, to investigate whether perceptual learning can be observed behaviorally, a same–different operant task was used to measure how starlings perceived small differences in motif structure. Birds trained to recognize conspecific songs were better at detecting variations in motifs from the songs they learned, even though this variation was not directly necessary to learn the associative task. Discrimination also improved as the reference stimulus was repeated multiple times. Perception of the much larger differences between different motifs was unaffected by training. These results indicate that sensory representations of motifs are enhanced when starlings learn to recognize songs. PMID:22087940

  18. Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses.

    PubMed

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2015-01-01

    Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.

  19. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    PubMed

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.
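    The "sequential regularities" in such streams are usually defined by transitional probabilities that are high within a recurring pattern and low across pattern boundaries, as in Saffran-style infant studies. A sketch of how such a stream could be generated and its statistics checked (the nonsense elements are invented, and only adjacent dependencies are shown):

```python
import random
from collections import Counter

def make_stream(words, n_words, rng):
    """Concatenate randomly ordered 'words' (fixed element triplets) into a
    continuous stream, avoiding immediate repetition, so that transitional
    probabilities are 1.0 within words and split across word boundaries."""
    stream, prev = [], None
    for _ in range(n_words):
        w = rng.choice([w for w in words if w is not prev])
        stream.extend(w)
        prev = w
    return stream

def transitional_probabilities(stream):
    """P(next | current) for every adjacent element pair in the stream."""
    pair = Counter(zip(stream, stream[1:]))
    first = Counter(stream[:-1])
    return {(a, b): n / first[a] for (a, b), n in pair.items()}

rng = random.Random(0)
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
tp = transitional_probabilities(make_stream(words, 300, rng))
within_word = {(w[i], w[i + 1]) for w in words for i in range(2)}
```

Nonadjacent regularities, also probed in the study, would be built analogously, with the dependency spanning an intervening variable element.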

  20. Generalized learning of visual-to-auditory substitution in sighted individuals.

    PubMed

    Kim, Jung-Kyong; Zatorre, Robert J

    2008-11-25

    Visual-to-auditory substitution involves delivering information about the visual world using auditory input. Although the potential suitability of sound as visual substitution has previously been demonstrated, the basic mechanism behind crossmodal learning is largely unknown; particularly, the degree to which learning generalizes to new stimuli has not been formally tested. We examined learning processes involving the use of the image-to-sound conversion system developed by Meijer [Meijer, P., 1992. An experimental system for auditory image representations. IEEE Trans Biomed Eng. 39 (2), 112-121.] that codes visual vertical and horizontal axes into frequency and time representations, respectively. Two behavioral experiments provided training to sighted individuals in a controlled environment. The first experiment explored the early learning stage, comparing performance of individuals who received short-term training and those who were only explicitly given the conversion rules. Both groups performed above chance, suggesting an intuitive understanding of the image-sound relationship; the lack of group difference indicates that this intuition could be acquired simply on the basis of explicit knowledge. The second experiment involved training over a three-week period using a larger variety of stimuli. Performance on both previously trained and novel items was examined over time. Performance on the familiar items was higher than on the novel items, but performance on the latter improved over time. While the lack of improvement with the familiar items suggests memory-based performance, the improvement with novel items demonstrated generalized learning, indicating abstraction of the conversion rules such that they could be applied to interpret auditory patterns coding new visual information. Such generalization could provide a basis for the substitution in a constantly changing visual environment.
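    The conversion described maps the vertical image axis to frequency and the horizontal axis to time. A minimal sketch of such a mapping (the sample rate, frequency range, and column duration are illustrative, not the published system's parameters):

```python
import numpy as np

def image_to_sound(image, fs=8000, col_dur=0.05, f_lo=200.0, f_hi=2800.0):
    """Meijer-style mapping sketch: scan the image left to right, one column
    per time slice; each row drives a sinusoid whose frequency increases with
    height and whose amplitude is the pixel brightness."""
    n_rows, n_cols = image.shape
    freqs = np.geomspace(f_lo, f_hi, n_rows)[::-1]   # top row -> highest pitch
    t = np.arange(int(fs * col_dur)) / fs
    cols = []
    for c in range(n_cols):
        # Sum one sinusoid per row, weighted by that row's brightness.
        col = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        cols.append(col.sum(axis=0))
    sound = np.concatenate(cols)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound

# A bright diagonal: the dominant pitch should fall across successive slices.
img = np.eye(8)                 # 8x8 image, ones on the main diagonal
audio = image_to_sound(img)
```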

  1. A role for descending auditory cortical projections in songbird vocal learning

    PubMed Central

    Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S

    2014-01-01

    Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934

  2. Test performance and classification statistics for the Rey Auditory Verbal Learning Test in selected clinical samples.

    PubMed

    Schoenberg, Mike R; Dawson, Kyra A; Duff, Kevin; Patton, Doyle; Scott, James G; Adams, Russell L

    2006-10-01

    The Rey Auditory Verbal Learning Test [RAVLT; Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique. Archives de Psychologie, 28, 21] is a commonly used neuropsychological measure that assesses verbal learning and memory. Normative data have been compiled [Schmidt, M. (1996). Rey Auditory and Verbal Learning Test: A handbook. Los Angeles, CA: Western Psychological Services]. When assessing an individual suspected of neurological dysfunction, useful comparisons include the extent that the patient deviates from healthy peers and also how closely the subject's performance matches those with known brain injury. This study provides the means and S.D.'s of 392 individuals with documented neurological dysfunction [closed head TBI (n=68), neoplasms (n=57), stroke (n=47), Dementia of the Alzheimer's type (n=158), and presurgical epilepsy left seizure focus (n=28), presurgical epilepsy right seizure focus (n=34)] and 122 patients with no known neurological dysfunction and psychiatric complaints. Patients were stratified into three age groups, 16-35, 36-59, and 60-88. Data were provided for trials I-V, List B, immediate recall, 30-min delayed recall, and recognition. Classification characteristics of the RAVLT using [Schmidt, M. (1996). Rey Auditory and Verbal Learning Test: A handbook. Los Angeles, CA: Western Psychological Services] meta-norms found the RAVLT to best distinguish patients suspected of Alzheimer's disease from the psychiatric comparison group.

  3. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children

    PubMed Central

    Murphy, Cristina F. B.; Moore, David R.; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory (“bottom-up”) and cognitive (“top-down”) processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported “far-transfer” to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups.

  4. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    PubMed

    Murphy, Cristina F B; Moore, David R; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research …

  5. Can basic auditory and cognitive measures predict hearing-impaired listeners' localization and spatial speech recognition abilities?

    PubMed

    Neher, Tobias; Laugesen, Søren; Jensen, Niels Søgaard; Kragelund, Louise

    2011-09-01

    This study aimed to clarify the basic auditory and cognitive processes that affect listeners' performance on two spatial listening tasks: sound localization and speech recognition in spatially complex, multi-talker situations. Twenty-three elderly listeners with mild-to-moderate sensorineural hearing impairments were tested on the two spatial listening tasks, a measure of monaural spectral ripple discrimination, a measure of binaural temporal fine structure (TFS) sensitivity, and two (visual) cognitive measures indexing working memory and attention. All auditory test stimuli were spectrally shaped to restore (partial) audibility for each listener on each listening task. Eight younger normal-hearing listeners served as a control group. Data analyses revealed that the chosen auditory and cognitive measures could predict neither sound localization accuracy nor speech recognition when the target and maskers were separated along the front-back dimension. When the competing talkers were separated along the left-right dimension, however, speech recognition performance was significantly correlated with the attentional measure. Furthermore, supplementary analyses indicated additional effects of binaural TFS sensitivity and average low-frequency hearing thresholds. Altogether, these results are in support of the notion that both bottom-up and top-down deficits are responsible for the impaired functioning of elderly hearing-impaired listeners in cocktail party-like situations. © 2011 Acoustical Society of America

  6. Increased Signal Complexity Improves the Breadth of Generalization in Auditory Perceptual Learning

    PubMed Central

    Brown, David J.; Proulx, Michael J.

    2013-01-01

    Perceptual learning can be specific to a trained stimulus or optimally generalized to novel stimuli with the breadth of generalization being imperative for how we structure perceptual training programs. Adapting an established auditory interval discrimination paradigm to utilise complex signals, we trained human adults on a standard interval for either 2, 4, or 10 days. We then tested the standard, alternate frequency, interval, and stereo input conditions to evaluate the rapidity of specific learning and breadth of generalization over the time course. In comparison with previous research using simple stimuli, the speed of perceptual learning and breadth of generalization were more rapid and greater in magnitude, including novel generalization to an alternate temporal interval within stimulus type. We also investigated the long term maintenance of learning and found that specific and generalized learning was maintained over 3 and 6 months. We discuss these findings regarding stimulus complexity in perceptual learning and how they can inform the development of effective training protocols. PMID:24349800

  7. Feedback valence affects auditory perceptual learning independently of feedback probability.

    PubMed

    Amitay, Sygal; Moore, David R; Molloy, Katharine; Halliday, Lorna F

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones; an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners' responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability.
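    The four training conditions can be reproduced by drawing feedback at random, independent of the response (the three tones are identical, so there is no correct answer to score). A sketch of that 2 x 2 design, valence crossed with probability (trial counts and labels are illustrative):

```python
import random

def feedback_schedule(n_trials, p_feedback, valence, rng):
    """Random feedback unrelated to the listener's responses: on each trial,
    feedback of the given valence is shown with probability p_feedback,
    otherwise no feedback is given.  E.g. 90% positive and 10% negative both
    signal 'doing well' overall; 10% positive and 90% negative both signal
    'doing badly'."""
    return [valence if rng.random() < p_feedback else None
            for _ in range(n_trials)]

rng = random.Random(1)
conditions = {
    "90% positive": feedback_schedule(1000, 0.9, "positive", rng),
    "10% negative": feedback_schedule(1000, 0.1, "negative", rng),
    "10% positive": feedback_schedule(1000, 0.1, "positive", rng),
    "90% negative": feedback_schedule(1000, 0.9, "negative", rng),
}
rates = {k: sum(f is not None for f in v) / len(v) for k, v in conditions.items()}
```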

  8. Feedback Valence Affects Auditory Perceptual Learning Independently of Feedback Probability

    PubMed Central

    Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones; an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173

  9. The Rey Auditory Verbal Learning Test: normative data developed for the Venezuelan population.

    PubMed

    Ferreira Correia, Aline; Campagna Osorio, Ilva

    2014-03-01

    The Rey Auditory Verbal Learning Test (RAVLT) is a neuropsychological tool widely used to assess functions such as attention, memory, and learning ability in the auditory-verbal domain. Norms for the test have been developed in many different languages and they show different relationships with demographic variables. The main objective of this research was to develop specific norms for the Venezuelan population, with particular focus on the influences of age, education, gender, and socioeconomic status. A Spanish version of the test was administered to a quota sample of 629 healthy adults. Pearson's correlation analysis (p < .001) showed a significant association between RAVLT performance and age (r = -.401), education (r = .386), and socioeconomic status (r = -.196), but not between RAVLT performance and gender (r = -.054). Due to the strength of the correlations, only age and education were considered in the development of final norms.
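    The norming logic, correlating test scores with demographic variables and retaining only the predictors with substantial correlations, can be sketched on synthetic data (the sample below is simulated with invented effect sizes, not the Venezuelan data set):

```python
import math
import random

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Synthetic sample of 629 "participants": score declines with age and rises
# with education, plus noise (all coefficients are illustrative).
rng = random.Random(42)
age = [rng.uniform(18, 80) for _ in range(629)]
edu = [rng.uniform(0, 20) for _ in range(629)]
score = [60 - 0.3 * a + 0.8 * e + rng.gauss(0, 5) for a, e in zip(age, edu)]

r_age = pearson_r(age, score)   # expected negative, as in the study
r_edu = pearson_r(edu, score)   # expected positive, as in the study
```

As in the abstract, a predictor whose correlation is weak (like gender there) would then simply be dropped from the final norms.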

  10. Developmental stress impairs performance on an association task in male and female songbirds, but impairs auditory learning in females only.

    PubMed

    Farrell, Tara M; Morgan, Amanda; MacDougall-Shackleton, Scott A

    2016-01-01

    In songbirds, early-life environments critically shape song development. Many studies have demonstrated that developmental stress impairs song learning and the development of song-control regions of the brain in males. However, song has evolved through signaller-receiver networks and the effect stress has on the ability to receive auditory signals is equally important, especially for females who use song as an indicator of mate quality. Female song preferences have been the metric used to evaluate how developmental stress affects auditory learning, but preferences are shaped by many non-cognitive factors and preclude the evaluation of auditory learning abilities in males. To determine whether developmental stress specifically affects auditory learning in both sexes, we subjected juvenile European starlings, Sturnus vulgaris, to either an ad libitum or an unpredictable food supply treatment from 35 to 115 days of age. In adulthood, we assessed learning of both auditory and visual discrimination tasks. Females reared in the experimental group were slower than females in the control group to acquire a relative frequency auditory task, and slower than their male counterparts to acquire an absolute frequency auditory task. There was no difference in auditory performance between treatment groups for males. However, on the colour association task, birds from the experimental group committed more errors per trial than control birds. There was no correlation in performance across the cognitive tasks. Developmental stress did not affect all cognitive processes equally across the sexes. Our results suggest that the male auditory system may be more robust to developmental stress than that of females.

  11. The modified Location Learning Test: norms for the assessment of spatial memory function in neuropsychological patients.

    PubMed

    Kessels, Roy P C; Nys, Gudrun M S; Brands, Augustina M A; van den Berg, Esther; Van Zandvoort, Martine J E

    2006-12-01

    This study examines the applicability of the modified Location Learning Test (mLLT) as a test of spatial memory in neuropsychological patients. Three groups of participants were examined: stroke patients, patients with diabetes mellitus and healthy participants (N=411). Three error measures were computed, the Total Score (index of overall performance), the Learning Index (the learning curve over subsequent trials) and the Delayed Recall Score, measuring decay over time. The Learning Index was the most sensitive measure, showing differences between the three groups as well as lateralization effects within the stroke group. Also, the mLLT correlated significantly with the Rey Auditory Verbal Learning Test, as well as with age and education level. Regression-based normative data were computed based on the healthy participants. In all, the mLLT appears to be a sensitive and valid test for the detection of object-location memory impairments in clinical groups.

  12. High Resolution Quantitative Synaptic Proteome Profiling of Mouse Brain Regions After Auditory Discrimination Learning

    PubMed Central

    Kolodziej, Angela; Smalla, Karl-Heinz; Richter, Sandra; Engler, Alexander; Pielot, Rainer; Dieterich, Daniela C.; Tischmeyer, Wolfgang; Naumann, Michael; Kähne, Thilo

    2016-01-01

    The molecular synaptic mechanisms underlying auditory learning and memory remain largely unknown. Here, the workflow of a proteomic study on auditory discrimination learning in mice is described. In this learning paradigm, mice are trained in a shuttle box Go/NoGo-task to discriminate between rising and falling frequency-modulated tones in order to avoid a mild electric foot-shock. The protocol involves the enrichment of synaptosomes from four brain areas, namely the auditory cortex, frontal cortex, hippocampus, and striatum, at different stages of training. Synaptic protein expression patterns obtained from trained mice are compared to naïve controls using a proteomic approach. To achieve sufficient analytical depth, samples are fractionated in three different ways prior to mass spectrometry, namely 1D SDS-PAGE/in-gel digestion, in-solution digestion and phospho-peptide enrichment. High-resolution proteomic analysis on a mass spectrometer and label-free quantification are used to examine synaptic protein profiles in phospho-peptide-depleted and phospho-peptide-enriched fractions of synaptosomal protein samples. A commercial software package is utilized to reveal proteins and phospho-peptides with significantly regulated relative synaptic abundance levels (trained/naïve controls). Common and differential regulation modes for the synaptic proteome in the investigated brain regions of mice after training were observed. Subsequently, meta-analyses utilizing several databases are employed to identify underlying cellular functions and biological pathways. PMID:28060347

  13. High Resolution Quantitative Synaptic Proteome Profiling of Mouse Brain Regions After Auditory Discrimination Learning.

    PubMed

    Kolodziej, Angela; Smalla, Karl-Heinz; Richter, Sandra; Engler, Alexander; Pielot, Rainer; Dieterich, Daniela C; Tischmeyer, Wolfgang; Naumann, Michael; Kähne, Thilo

    2016-12-15

    The molecular synaptic mechanisms underlying auditory learning and memory remain largely unknown. Here, the workflow of a proteomic study on auditory discrimination learning in mice is described. In this learning paradigm, mice are trained in a shuttle box Go/NoGo-task to discriminate between rising and falling frequency-modulated tones in order to avoid a mild electric foot-shock. The protocol involves the enrichment of synaptosomes from four brain areas, namely the auditory cortex, frontal cortex, hippocampus, and striatum, at different stages of training. Synaptic protein expression patterns obtained from trained mice are compared to naïve controls using a proteomic approach. To achieve sufficient analytical depth, samples are fractionated in three different ways prior to mass spectrometry, namely 1D SDS-PAGE/in-gel digestion, in-solution digestion and phospho-peptide enrichment. High-resolution proteomic analysis on a mass spectrometer and label-free quantification are used to examine synaptic protein profiles in phospho-peptide-depleted and phospho-peptide-enriched fractions of synaptosomal protein samples. A commercial software package is utilized to reveal proteins and phospho-peptides with significantly regulated relative synaptic abundance levels (trained/naïve controls). Common and differential regulation modes for the synaptic proteome in the investigated brain regions of mice after training were observed. Subsequently, meta-analyses utilizing several databases are employed to identify underlying cellular functions and biological pathways.

  14. Dorsal Hippocampus Function in Learning and Expressing a Spatial Discrimination

    ERIC Educational Resources Information Center

    White, Norman M.; Gaskin, Stephane

    2006-01-01

    Learning to discriminate between spatial locations defined by two adjacent arms of a radial maze in the conditioned cue preference paradigm requires two kinds of information: latent spatial learning when the rats explore the maze with no food available, and learning about food availability in two spatial locations when the rats are then confined…

  16. Auditory evoked potential: a proposal for further evaluation in children with learning disabilities.

    PubMed

    Frizzo, Ana C F

    2015-01-01

The information presented in this paper reflects the author's experience in previous cross-sectional studies conducted in Brazil, in comparison with the current literature. Over the last 10 years, auditory evoked potential (AEP) has been used in children with learning disabilities. This method is critical for analyzing the temporal quality of processing and indicates the specific neural demands and circuits of sensory and cognitive processes in this clinical population. Studies of children with dyslexia and learning disabilities are presented here to illustrate the use of AEP in this population.

  17. Learning Disabilities and the Auditory and Visual Matching Computer Program

    ERIC Educational Resources Information Center

    Tormanen, Minna R. K.; Takala, Marjatta; Sajaniemi, Nina

    2008-01-01

    This study examined whether audiovisual computer training without linguistic material had a remedial effect on different learning disabilities, like dyslexia and ADD (Attention Deficit Disorder). This study applied a pre-test-intervention-post-test design with students (N = 62) between the ages of 7 and 19. The computer training lasted eight weeks…

  18. Word learning in deaf children with cochlear implants: effects of early auditory experience

    PubMed Central

    Houston, Derek M.; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T.

    2013-01-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. PMID:22490184

  19. The effect of early auditory experience on the spatial listening skills of children with bilateral cochlear implants.

    PubMed

    Killan, Catherine F; Royle, Nicola; Totten, Catherine L; Raine, Christopher H; Lovett, Rosemary E S

    2015-12-01

    Both electrophysiological and behavioural studies suggest that auditory deprivation during the first months and years of life can impair listening skills. Electrophysiological studies indicate that 3½ years may be a critical age for the development of symmetrical cortical responses in children using bilateral cochlear implants. This study aimed to examine the effect of auditory experience during the first 3½ years of life on the behavioural spatial listening abilities of children using bilateral cochlear implants, with reference to normally hearing children. Data collected during research and routine clinical testing were pooled to compare the listening skills of children with bilateral cochlear implants and different periods of auditory deprivation. Children aged 4-17 years with bilateral cochlear implants were classified into three groups. Children born profoundly deaf were in the congenital early bilateral group (received bilateral cochlear implants aged ≤3½ years, n=28) or congenital late bilateral group (received first implant aged ≤3½ years and second aged >3½ years, n=38). Children with some bilateral acoustic hearing until the age of 3½ years, who subsequently became profoundly deaf and received bilateral cochlear implants, were in the acquired/progressive group (n=16). There were 32 children in the normally hearing group. Children completed tests of sound-source localization and spatial release from masking (a measure of the ability to use both ears to understand speech in noise). The acquired/progressive group localized more accurately than both groups of congenitally deaf children (p<0.05). All three groups of children with cochlear implants showed similar spatial release from masking. The normally hearing group localized more accurately than all groups with bilateral cochlear implants and displayed more spatial release from masking than the congenitally deaf groups on average (p<0.05). Children with bilateral cochlear implants and early
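Spatial release from masking, described above as a measure of the ability to use both ears to understand speech in noise, is conventionally computed as the difference between speech reception thresholds (SRTs) in colocated and spatially separated masker conditions. A minimal sketch; the threshold values below are hypothetical, not the study's data:

```python
def spatial_release(srt_colocated_db, srt_separated_db):
    """Spatial release from masking (dB): the improvement in speech reception
    threshold (SRT) when the masker is moved away from the target talker.
    Larger positive values mean a greater binaural benefit."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs (dB SNR), for illustration only:
srm_normal_hearing = spatial_release(-2.0, -8.0)  # 6 dB of release
srm_implant_user = spatial_release(-1.0, -3.0)    # 2 dB of release
```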

  20. Rapid Auditory Processing and Learning Deficits in Rats With P1 versus P7 Neonatal Hypoxic-Ischemic Injury

    PubMed Central

    McClure, Melissa M.; Threlkeld, Steven W.; Rosen, Glenn D.; Fitch, R. Holly

    2014-01-01

    Hypoxia-ischemia (HI) is associated with premature birth, and injury during term birth. Many infants experiencing HI later show disruptions of language, with research suggesting that rapid auditory processing (RAP) deficits, or impairments in the ability to discriminate rapidly changing acoustic signals, play a causal role in emergent language problems. We recently bridged these lines of research by showing RAP deficits in rats with unilateral-HI injury induced on postnatal day 1, 7, or 10 (P1, P7, or P10; 23). While robust RAP deficits were found in HI animals, it was suggested that our within-age sample size did not provide us with sufficient power to detect Age-at-injury differences within HI groups. The current study sought to examine differences in neuropathology and behavior following unilateral-HI injury in P1 vs. P7 pups. Ages chosen for HI induction reflect differential stages of neurodevelopmental maturity, and subsequent regional differences in vulnerability to reduced blood flow/oxygen (modeling premature/term HI injury). Results showed that during the juvenile period, both P1 and P7 HI groups exhibited significant RAP deficits, but the deficit in the P1 HI group resolved with repeated testing (compared to shams). However, P7 HI animals showed lasting deficits in RAP and spatial learning/memory through adulthood. The current findings are in accord with evidence that HI injury during different stages of developmental maturity (Age-at-injury) leads to differential neuropathologies, and provide the novel observation that in rats, P1 vs. P7 induced pathologies are associated with different patterns of auditory processing and learning/memory deficits across the lifespan. PMID:16765458

  1. Brain dynamics that correlate with effects of learning on auditory distance perception.

    PubMed

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
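The theta ERS (4-8 Hz) and alpha ERD (8-12 Hz) effects described above rest on estimating spectral power within a frequency band. A minimal band-power sketch on a synthetic trace; the sampling rate and signal are illustrative, not taken from the study:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean DFT power of `signal` over bins whose frequency lies in [f_lo, f_hi)."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            total += (re * re + im * im) / n
            count += 1
    return total / count

fs = 128  # sampling rate (Hz)
t = [i / fs for i in range(2 * fs)]  # 2 s of synthetic "EEG"
# Strong 6 Hz (theta) component plus a weak 10 Hz (alpha) component.
x = [2.0 * math.sin(2 * math.pi * 6 * ti) + 0.5 * math.sin(2 * math.pi * 10 * ti)
     for ti in t]

theta = band_power(x, fs, 4, 8)    # theta ERS band
alpha = band_power(x, fs, 8, 12)   # alpha ERD band
```

With the stronger 6 Hz component, theta-band power dominates, mirroring how ERS/ERD are quantified as band-limited power changes relative to a baseline.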

  2. Auditory inspired machine learning techniques can improve speech intelligibility and quality for hearing-impaired listeners.

    PubMed

    Monaghan, Jessica J M; Goehring, Tobias; Yang, Xin; Bolner, Federico; Wang, Shangqiguo; Wright, Matthew C M; Bleeck, Stefan

    2017-03-01

    Machine-learning based approaches to speech enhancement have recently shown great promise for improving speech intelligibility for hearing-impaired listeners. Here, the performance of three machine-learning algorithms and one classical algorithm, Wiener filtering, was compared. Two algorithms based on neural networks were examined, one using a previously reported feature set and one using a feature set derived from an auditory model. The third machine-learning approach was a dictionary-based sparse-coding algorithm. Speech intelligibility and quality scores were obtained for participants with mild-to-moderate hearing impairments listening to sentences in speech-shaped noise and multi-talker babble following processing with the algorithms. Intelligibility and quality scores were significantly improved by each of the three machine-learning approaches, but not by the classical approach. The largest improvements for both speech intelligibility and quality were found by implementing a neural network using the feature set based on auditory modeling. Furthermore, neural network based techniques appeared more promising than dictionary-based, sparse coding in terms of performance and ease of implementation.
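The classical baseline in the comparison above, Wiener filtering, applies a per-frequency gain S/(S+N) derived from estimated speech and noise power spectra. A minimal spectral-domain sketch, using an oracle noise spectrum and a single sinusoid as a stand-in for speech (both simplifications for illustration, not the paper's implementation):

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Per-frequency Wiener gain S/(S+N), with the speech PSD S estimated by
    spectral subtraction and the gain floored to limit musical-noise artifacts."""
    speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)
    return np.maximum(speech_psd / (speech_psd + noise_psd + 1e-12), floor)

rng = np.random.default_rng(0)
fs, n = 8000, 1024
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 437.5 * t)     # one harmonic as a stand-in for speech
noise = 0.3 * rng.standard_normal(n)
noisy = clean + noise

gain = wiener_gain(np.abs(np.fft.rfft(noisy)) ** 2,
                   np.abs(np.fft.rfft(noise)) ** 2)  # oracle noise PSD (sketch only)
enhanced = np.fft.irfft(np.fft.rfft(noisy) * gain, n)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_enhanced = float(np.mean((enhanced - clean) ** 2))
```

The machine-learning approaches in the study replace this fixed statistical gain rule with a learned mapping from noisy features to gains, which is where their intelligibility advantage comes from.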

  3. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning

    PubMed Central

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H.R.; Schmidt, Marc

    2015-01-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC’s auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf’s involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. PMID:23603062

  4. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    PubMed

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans.

  5. Does Size Really Matter? The Role of Tonotopic Map Area Dynamics for Sound Learning in Mouse Auditory Cortex

    PubMed Central

    Brünner, Hans Sperup

    2017-01-01

    Abstract This commentary centers on the novel findings by Shepard et al. (2016) published in eNeuro. The authors interrogated tonotopic map dynamics in auditory cortex (ACtx) by employing a natural sound-learning paradigm, where mothers learn the importance of pup ultrasonic vocalizations (USVs), allowing Shepard et al. to probe the role of map area expansion for auditory learning. They demonstrate that auditory learning in this paradigm does not rely on map expansion but is facilitated by increased inhibition of neurons tuned to low-frequency sounds. Here, we discuss the findings in light of the emerging enthusiasm for cortical inhibitory interneurons for circuit function and hypothesize how a particular interneuron type might be causally involved for the intriguing results obtained by Shepard et al. PMID:28197554

  6. Does Size Really Matter? The Role of Tonotopic Map Area Dynamics for Sound Learning in Mouse Auditory Cortex.

    PubMed

    Brünner, Hans Sperup; Rasmussen, Rune

    2017-01-01

    This commentary centers on the novel findings by Shepard et al. (2016) published in eNeuro. The authors interrogated tonotopic map dynamics in auditory cortex (ACtx) by employing a natural sound-learning paradigm, where mothers learn the importance of pup ultrasonic vocalizations (USVs), allowing Shepard et al. to probe the role of map area expansion for auditory learning. They demonstrate that auditory learning in this paradigm does not rely on map expansion but is facilitated by increased inhibition of neurons tuned to low-frequency sounds. Here, we discuss the findings in light of the emerging enthusiasm for cortical inhibitory interneurons for circuit function and hypothesize how a particular interneuron type might be causally involved for the intriguing results obtained by Shepard et al.

  7. Implicitly learned suppression of irrelevant spatial locations.

    PubMed

    Leber, Andrew B; Gwinn, Rachael E; Hong, Yoolim; O'Toole, Ryan J

    2016-12-01

How do we ignore a salient, irrelevant stimulus whose location is predictable? A variety of studies using instructional manipulations have shown that participants possess the capacity to exert location-based suppression. However, for the visual search challenges we face in daily life, we are not often provided explicit instructions and are unlikely to consciously deliberate on what our best strategy might be. Instead, we might rely on our past experience (in the form of implicit learning) to exert strategic control. In this paper, we tested whether implicit learning could drive spatial suppression. In Experiment 1, participants searched displays in which one location contained a target, while another contained a salient distractor. An arrow cue pointed to the target location with 70% validity. Also, unbeknownst to the participants, the same arrow cue predicted the distractor location with 70% validity. Results showed facilitated RTs to the predicted target location, confirming target enhancement. Critically, distractor interference was reduced at the predicted distractor location, revealing that participants used spatial suppression. Further, we found that participants had no explicit knowledge of the cue-distractor contingencies, confirming that the learning was implicit. In Experiment 2, to seek further evidence for suppression, we modified the task to include occasional masked probes following the arrow cue; we found worse probe identification accuracy at the predicted distractor location than control locations, providing converging evidence that observers spatially suppressed the predicted distractor locations. These results reveal an ecologically desirable mechanism of suppression, which functions without the need for conscious knowledge or externally guided instructions.
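The contingency structure described in this abstract (one arrow cue predicting both the target and the distractor location with 70% validity) can be sketched as a trial generator. The opposite-location mapping for the distractor is an illustrative assumption, not the study's actual design:

```python
import random

LOCATIONS = ["up", "right", "down", "left"]
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def make_trial(rng, validity=0.7):
    """One search trial. The arrow cue points at the target with p = validity;
    the location OPPOSITE the cue (an illustrative mapping) holds the salient
    distractor with the same probability. Target and distractor can
    occasionally coincide here; a real design would redraw such trials."""
    cue = rng.choice(LOCATIONS)
    target = cue if rng.random() < validity else rng.choice(
        [loc for loc in LOCATIONS if loc != cue])
    predicted = OPPOSITE[cue]
    distractor = predicted if rng.random() < validity else rng.choice(
        [loc for loc in LOCATIONS if loc != predicted])
    return cue, target, distractor

rng = random.Random(42)
trials = [make_trial(rng) for _ in range(10000)]
cue_target_validity = sum(t == c for c, t, _ in trials) / len(trials)
cue_distractor_validity = sum(d == OPPOSITE[c] for c, _, d in trials) / len(trials)
```

Over many trials both empirical validities converge on 0.7, which is the statistical regularity the participants implicitly learned.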

  8. Sound Sequence Discrimination Learning Motivated by Reward Requires Dopaminergic D2 Receptor Activation in the Rat Auditory Cortex

    ERIC Educational Resources Information Center

    Kudoh, Masaharu; Shibuki, Katsuei

    2006-01-01

    We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…

  10. Spatial learning efficiency and error monitoring in normal aging: an investigation using a novel hidden maze learning test.

    PubMed

    Pietrzak, Robert H; Cohen, Henri; Snyder, Peter J

    2007-02-01

This study compared 19 older adults and 20 younger adults on the Groton Maze Learning Test (GMLT), a novel computerized hidden maze learning test that assesses processing speed, spatial learning efficiency, and error monitoring. Convergent validity of this test was assessed by comparing GMLT scores to Paced Auditory Serial Addition Test (PASAT) and Tower of Toronto (TOT) scores. In the full sample, all GMLT measures correlated strongly with both PASAT and TOT scores (r's=0.53 to 0.73). GMLT measures most sensitive to detecting between-group differences were the Timed Chase Test (TCT), legal errors, and perseverative errors (Cohen's d's=3.81, 2.40, and 2.40, respectively). Scores on the visuomotor processing speed subtest of the GMLT attenuated the relationship between age group and maze efficiency index scores, but not perseverative and "rule-break" errors. These results suggest that normal aging is associated with impaired performance on a novel computerized measure of spatial learning efficiency and error monitoring, and that processing speed attenuates the relationship between age and spatial learning efficiency.
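The between-group effects above are reported as Cohen's d, the mean difference scaled by the pooled standard deviation. A self-contained sketch using hypothetical error counts (not the study's data):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (Bessel-corrected variances)."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical per-subject error counts, for illustration only:
older = [12, 14, 11, 15, 13]
younger = [6, 7, 5, 8, 6]
d = cohens_d(older, younger)
```

Values above roughly 0.8 are conventionally called large effects, which puts the d's of 2.4-3.8 reported above in context.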

  11. Dissociation of Neural Networks for Predisposition and for Training-Related Plasticity in Auditory-Motor Learning

    PubMed Central

    Herholz, Sibylle C.; Coffey, Emily B.J.; Pantev, Christo; Zatorre, Robert J.

    2016-01-01

    Skill learning results in changes to brain function, but at the same time individuals strongly differ in their abilities to learn specific skills. Using a 6-week piano-training protocol and pre- and post-fMRI of melody perception and imagery in adults, we dissociate learning-related patterns of neural activity from pre-training activity that predicts learning rates. Fronto-parietal and cerebellar areas related to storage of newly learned auditory-motor associations increased their response following training; in contrast, pre-training activity in areas related to stimulus encoding and motor control, including right auditory cortex, hippocampus, and caudate nuclei, was predictive of subsequent learning rate. We discuss the implications of these results for models of perceptual and of motor learning. These findings highlight the importance of considering individual predisposition in plasticity research and applications. PMID:26139842

  12. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory

    PubMed Central

    Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning association of congruent pairs. In addition, subjects’ performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance of the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects significantly made more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of higher false alarm rate in the passive learning mode by using a computational model, composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of making more mistakes and reporting lower confidence in non-congruent pairs in the passive learning mode. PMID:27314235
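The computational account above pairs a reinforcement-learning module with a memory-decay module; its key claim is that a higher decay rate alone produces more false alarms on non-congruent pairs. A toy delta-rule sketch of that claim, with all parameter values chosen for illustration:

```python
def residual_strength(decay, n_trials=200, lr=0.2):
    """Toy model: a delta-rule learner pushes the congruent-pair strength
    toward 1 and the non-congruent strength toward 0; after every trial,
    memory decay pulls both back toward the uncertain midpoint (0.5).
    Returns the final non-congruent strength, a proxy for the tendency
    to falsely judge a non-congruent pair as 'associated'."""
    w_con, w_non = 0.5, 0.5
    for _ in range(n_trials):
        w_con += lr * (1.0 - w_con)     # reinforced: pair is associated
        w_non += lr * (0.0 - w_non)     # not reinforced
        w_con += decay * (0.5 - w_con)  # memory decay toward uncertainty
        w_non += decay * (0.5 - w_non)
    return w_non

active = residual_strength(decay=0.02)   # active/attentive learning: slow decay
passive = residual_strength(decay=0.15)  # passive mode: faster decay (the model's claim)
```

With faster decay, the non-congruent strength settles closer to the uncertain midpoint, reproducing the pattern of more false alarms and lower confidence in the passive mode.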

  13. Synaptic proteome changes in mouse brain regions upon auditory discrimination learning.

    PubMed

    Kähne, Thilo; Kolodziej, Angela; Smalla, Karl-Heinz; Eisenschmidt, Elke; Haus, Utz-Uwe; Weismantel, Robert; Kropf, Siegfried; Wetzel, Wolfram; Ohl, Frank W; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2012-08-01

Changes in synaptic efficacy underlying learning and memory processes are assumed to be associated with alterations of the protein composition of synapses. Here, we performed a quantitative proteomic screen to monitor changes in the synaptic proteome of four brain areas (auditory cortex, frontal cortex, hippocampus, striatum) during auditory learning. Mice were trained in a shuttle box GO/NO-GO paradigm to discriminate between rising and falling frequency modulated tones to avoid mild electric foot shock. Control-treated mice received corresponding numbers of either the tones or the foot shocks. Six hours and 24 h later, the composition of a fraction enriched in synaptic cytomatrix-associated proteins was compared to that obtained from naïve mice by quantitative mass spectrometry. In the synaptic protein fraction obtained from trained mice, the average percentage (±SEM) of downregulated proteins (59.9 ± 0.5%) exceeded that of upregulated proteins (23.5 ± 0.8%) in the brain regions studied. This effect was significantly smaller in foot shock (42.7 ± 0.6% down, 40.7 ± 1.0% up) and tone controls (43.9 ± 1.0% down, 39.7 ± 0.9% up). These data suggest that learning processes initially induce removal and/or degradation of proteins from presynaptic and postsynaptic cytoskeletal matrices before these structures can acquire a new, postlearning organisation. In silico analysis points to a general role of insulin-like signalling in this process.
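The up/down percentages above come from classifying each protein's trained/naïve abundance ratio. A sketch with an illustrative 1.3-fold cutoff and made-up ratios; the study's actual criteria include significance testing and are richer than this:

```python
def regulation_summary(ratios, up_cutoff=1.3):
    """Classify trained/naive abundance ratios as up-, down-, or unregulated
    using a symmetric fold-change cutoff (illustrative threshold only)."""
    down_cutoff = 1.0 / up_cutoff
    n = len(ratios)
    n_up = sum(r >= up_cutoff for r in ratios)
    n_down = sum(r <= down_cutoff for r in ratios)
    return {
        "up_pct": 100.0 * n_up / n,
        "down_pct": 100.0 * n_down / n,
        "unchanged_pct": 100.0 * (n - n_up - n_down) / n,
    }

# Hypothetical ratios for eight proteins, skewed downward as in trained mice:
summary = regulation_summary([0.5, 0.6, 0.7, 1.0, 1.5, 0.55, 2.0, 0.9])
```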

  14. Enhanced cognitive flexibility in reversal learning induced by removal of the extracellular matrix in auditory cortex.

    PubMed

    Happel, Max F K; Niekisch, Hartmut; Castiblanco Rivera, Laura L; Ohl, Frank W; Deliano, Matthias; Frischknecht, Renato

    2014-02-18

During brain maturation, the occurrence of the extracellular matrix (ECM) terminates juvenile plasticity by mediating structural stability. Interestingly, enzymatic removal of the ECM restores juvenile forms of plasticity, as for instance demonstrated by topographical reconnectivity in sensory pathways. However, to which degree the mature ECM is a compromise between stability and flexibility in the adult brain impacting synaptic plasticity as a fundamental basis for learning, lifelong memory formation, and higher cognitive functions is largely unknown. In this study, we removed the ECM in the auditory cortex of adult Mongolian gerbils during specific phases of cortex-dependent auditory relearning, which was induced by the contingency reversal of a frequency-modulated tone discrimination, a task requiring high behavioral flexibility. We found that ECM removal promoted a significant increase in relearning performance, without erasing already established (that is, learned) capacities when continuing discrimination training. The cognitive flexibility required for reversal learning of previously acquired behavioral habits, commonly understood to mainly rely on frontostriatal circuits, was enhanced by promoting synaptic plasticity via ECM removal within the sensory cortex. Our findings further suggest experimental modulation of the cortical ECM as a tool to open short-term windows of enhanced activity-dependent reorganization allowing for guided neuroplasticity.

  15. Human brainstem plasticity: the interaction of stimulus probability and auditory learning.

    PubMed

    Skoe, Erika; Chandrasekaran, Bharath; Spitzer, Emily R; Wong, Patrick C M; Kraus, Nina

    2014-03-01

    Two forms of brainstem plasticity are known to occur: an immediate stimulus probability-based and learning-dependent plasticity. Whether these kinds of plasticity interact is unknown. We examined this question in a training experiment involving three phases: (1) an initial baseline measurement, (2) a 9-session training paradigm, and (3) a retest measurement. At the outset of the experiment, auditory brainstem responses (ABR) were recorded to two unfamiliar pitch patterns presented in an oddball paradigm. Then half the participants underwent sound-to-meaning training where they learned to match these pitch patterns to novel words, with the remaining participants serving as controls who received no auditory training. Nine days after the baseline measurement, the pitch patterns were re-presented to all participants using the same oddball paradigm. Analysis of the baseline recordings revealed an effect of probability: when a sound was presented infrequently, the pitch contour was represented less accurately in the ABR than when it was presented frequently. After training, pitch tracking was more accurate for infrequent sounds, particularly for the pitch pattern that was encoded more poorly pre-training. However, the control group was stable over the same interval. Our results provide evidence that probability-based and learning-dependent plasticity interact in the brainstem.
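The oddball paradigm above presents one pitch pattern frequently and one infrequently. A minimal sequence generator; the no-consecutive-deviants constraint is a common convention assumed here, not something stated in the abstract:

```python
import random

def oddball_sequence(n, p_rare=0.2, seed=0):
    """Pseudo-random oddball stream: 'standard' is frequent, 'deviant' rare.
    Forbidding back-to-back deviants (assumed convention) lowers the realized
    deviant rate below p_rare, to roughly p_rare / (1 + p_rare)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n):
        if seq and seq[-1] == "deviant":
            seq.append("standard")        # never two deviants in a row
        elif rng.random() < p_rare:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

seq = oddball_sequence(1000)
deviant_rate = seq.count("deviant") / len(seq)
```

It is this asymmetry in presentation probability that lets the ABR comparison above separate probability-based encoding differences from training-dependent changes.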

  16. Spatial learning by mice in three dimensions

    PubMed Central

    Wilson, Jonathan J.; Harding, Elizabeth; Fortier, Mathilde; James, Benjamin; Donnett, Megan; Kerslake, Alasdair; O’Leary, Alice; Zhang, Ningyu; Jeffery, Kate

    2015-01-01

    We tested whether mice can represent locations distributed throughout three-dimensional space, by developing a novel three-dimensional radial arm maze. The three-dimensional radial maze, or “radiolarian” maze, consists of a central spherical core from which arms project in all directions. Mice learn to retrieve food from the ends of the arms without omitting any arms or re-visiting depleted ones. We show here that mice can learn both a standard working memory task, in which all arms are initially baited, and also a reference memory version in which only a subset are ever baited. Comparison with a two-dimensional analogue of the radiolarian maze, the hexagon maze, revealed equally good working-memory performance in both mazes if all the arms were initially baited, but reduced working and reference memory in the partially baited radiolarian maze. This suggests intact three-dimensional spatial representation in mice over short timescales but impairment of the formation and/or use of long-term spatial memory of the maze. We discuss potential mechanisms for how mice solve the three-dimensional task, and reasons for the impairment relative to its two-dimensional counterpart, concluding with some speculations about how mammals may represent three-dimensional space. PMID:25930216

  17. Generalization of sensory auditory learning to top-down skills in a randomized controlled trial.

    PubMed

    Murphy, Cristina B; Peres, Andressa K; Zachi, Elaine C; Ventura, Dora F; Pagan-Neves, Luciana; Wertzner, Haydee F; Schochat, Eliane

    2015-01-01

    Research has shown that auditory training improves auditory sensory skills; however, it is unclear whether this improvement is transferred to top-down skills, such as memory, attention, and language, and whether it depends on group characteristics in regard to memory and attention skills. The primary goal of this research was to investigate the generalization of learning from auditory sensory skills to top-down skills such as memory, attention, and language. We also aimed to compare whether this generalization process occurs in the same way among typically developing children and children with speech sound disorder. This study was a randomized controlled trial. Typically developing 7- to 12-yr-old children and children with speech sound disorder were separated into four groups: a trained control group (TDT; n = 10, age 9.6 ± 2.0 yr), a nontrained control group (TDNT; n = 11, age 8.2 ± 1.6 yr), a trained study group (SSDT; n = 10, age 7.7 ± 1.2 yr), and a nontrained study group (SSDNT; n = 8, age 8.6 ± 1.2 yr). Both trained groups underwent a computerized, nonverbal auditory training that focused on frequency discrimination, ordering, and backward-masking tasks. The training consisted of twelve 45 min sessions, once a week, for a total of approximately 9 hr of training. Near-transfer (Gap-In-Noise [GIN] and Frequency Pattern Test) and far-transfer measures (auditory and visual sustained attention tests, phonological working memory and language tests) were applied before and after training. The results were analyzed using a 2 × 2 × 2 mixed-model analysis of variance with group and training as the between-group variables and period as the within-group variable. The significance threshold was p ≤ 0.05. There was a group × period × training interaction for GIN [F(1,35) = 7.18, p = 0.011], indicating a significant threshold reduction only for the TDT group (Tukey multiple comparisons). There was a significant group × period interaction [F(1,35) = 5

  18. Auditory processing disorder in patients with language-learning impairment and correlation with malformation of cortical development.

    PubMed

    Boscariol, Mirela; Guimarães, Catarina Abraão; Hage, Simone R de Vasconcellos; Garcia, Vera Lucia; Schmutzler, Kátia M R; Cendes, Fernando; Guerreiro, Marilisa Mantovani

    2011-11-01

    Malformations of cortical development have been described in children and families with language-learning impairment. The objective of this study was to assess the auditory processing information in children with language-learning impairment in the presence or absence of a malformation of cortical development in the auditory processing areas. We selected 32 children (19 males), aged eight to 15 years, divided into three groups: Group I comprised 11 children with language-learning impairment and bilateral perisylvian polymicrogyria, Group II comprised 10 children with language-learning impairment and normal MRI, and Group III comprised 11 normal children. Behavioral auditory tests, such as the Random Gap Detection Test and Digits Dichotic Test were performed. Statistical analysis was performed using the Kruskal-Wallis test and Mann-Whitney test, with a level of significance of 0.05. The results revealed a statistically significant difference among the groups. Our data showed abnormalities in auditory processing of children in Groups I and II when compared with the control group, with children in Group I being more affected than children in Group II. Our data showed that the presence of a cortical malformation correlates with a worse performance in some tasks of auditory processing function.

  19. Assessing Spatial Learning and Memory in Rodents

    PubMed Central

    Vorhees, Charles V.; Williams, Michael T.

    2014-01-01

    Maneuvering safely through the environment is central to survival of almost all species. The ability to do this depends on learning and remembering locations. This capacity is encoded in the brain by two systems: one using cues outside the organism (distal cues), allocentric navigation, and one using self-movement, internal cues and nearby proximal cues, egocentric navigation. Allocentric navigation involves the hippocampus, entorhinal cortex, and surrounding structures; in humans this system encodes allocentric, semantic, and episodic memory. This form of memory is assessed in laboratory animals in many ways, but the dominant form of assessment is the Morris water maze (MWM). Egocentric navigation involves the dorsal striatum and connected structures; in humans this system encodes routes and integrated paths and, when overlearned, becomes procedural memory. In this article, several allocentric assessment methods for rodents are reviewed and compared with the MWM. MWM advantages (little training required, no food deprivation, ease of testing, rapid and reliable learning, insensitivity to differences in body weight and appetite, absence of nonperformers, control methods for proximal cue learning, and performance effects) and disadvantages (concern about stress, perhaps not as sensitive for working memory) are discussed. Evidence-based design improvements and testing methods are reviewed for both rats and mice. Experimental factors that apply generally to spatial navigation and to MWM specifically are considered. It is concluded that, on balance, the MWM has more advantages than disadvantages and compares favorably with other allocentric navigation tasks. PMID:25225309

  20. Physical fitness modulates incidental but not intentional statistical learning of simultaneous auditory sequences during concurrent physical exercise.

    PubMed

    Daikoku, Tatsuya; Takahashi, Yuji; Futagami, Hiroko; Tarumoto, Nagayoshi; Yasuda, Hideki

    2017-02-01

    In real-world auditory environments, humans are exposed to overlapping auditory information, such as human voices and musical instruments, even during routine physical activities such as walking and cycling. The present study investigated how concurrent physical exercise affects incidental and intentional learning of overlapping auditory streams, and whether physical fitness modulates learning performance. Participants were divided into lower- and higher-fitness groups of 11 each, based on their VO2max values. They were presented with simultaneous auditory sequences, each with a distinct statistical regularity (i.e. statistical learning), both while pedaling on a bike and while sitting on the bike at rest. In experiment 1, they were instructed to attend to one of the two sequences and ignore the other. In experiment 2, they were instructed to attend to both sequences. After exposure to the sequences, learning effects were evaluated with a familiarity test. In experiment 1, statistical learning of the ignored sequence during concurrent pedaling was better in participants with high than with low physical fitness, whereas for the attended sequence there was no significant difference between fitness groups; there was also no significant effect of physical fitness on learning at rest. In experiment 2, participants with both high and low physical fitness showed intentional statistical learning of the two simultaneous sequences in both the exercise and rest sessions. Improved physical fitness might therefore facilitate incidental, but not intentional, statistical learning of simultaneous auditory sequences during concurrent physical exercise.
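    Statistical regularities of the kind these streams carry are usually built from fixed "tone words": transitions within a word are fully predictable, while transitions between words are not. A hypothetical sketch of such a stream (the triplet inventory is illustrative, not taken from the study):

```python
import random

def tone_stream(words, n_words=60, seed=0):
    """Concatenate randomly ordered 'tone words' (fixed triplets), avoiding
    immediate word repeats. Within-word transition probability is 1.0,
    while between-word transitions are unpredictable -- the regularity
    that statistical learning exploits."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in words if w is not prev])  # no repeats
        stream.extend(word)
        prev = word
    return stream

# Hypothetical triplet inventory; letters stand in for distinct tones.
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]
stream = tone_stream(words)
```

    A familiarity test then asks listeners to choose between a triplet that occurred in the stream (e.g. A-B-C) and a foil spanning a word boundary (e.g. C-D-E), which has a much lower transitional probability.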

  1. Spatial short-term memory in children with nonverbal learning disabilities: impairment in encoding spatial configuration.

    PubMed

    Narimoto, Tadamasa; Matsuura, Naomi; Takezawa, Tomohiro; Mitsuhashi, Yoshinori; Hiratani, Michio

    2013-01-01

    The authors investigated whether the impaired spatial short-term memory exhibited by children with nonverbal learning disabilities is due to a problem in the encoding process. Children with or without nonverbal learning disabilities performed a simple spatial test that required them to remember 3, 5, or 7 spatial items presented simultaneously in random positions (i.e., spatial configuration) and to decide whether a target item had changed position or all items, including the target, were in the same position. The results showed that, even when the spatial positions in the encoding and probe phases were similar, the mean proportion correct for children with nonverbal learning disabilities was 0.58, while that for children without nonverbal learning disabilities was 0.84. Based on these results, the authors argue that children with nonverbal learning disabilities have difficulty encoding relational information between spatial items, and that this difficulty is responsible for their impaired spatial short-term memory.

  2. Preconditioning of Spatial and Auditory Cues: Roles of the Hippocampus, Frontal Cortex, and Cue-Directed Attention

    PubMed Central

    Talk, Andrew C.; Grasby, Katrina L.; Rawson, Tim; Ebejer, Jane L.

    2016-01-01

    Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity. PMID:27999366

  3. Cross-modal compatibility effects with visual-spatial and auditory-verbal stimulus and response sets.

    PubMed

    Proctor, R W; Dutta, A; Kelly, P L; Weeks, D J

    1994-01-01

    Within the visual-spatial and auditory-verbal modalities, reaction times to a stimulus have been shown to be faster if salient features of the stimulus and response sets correspond than if they do not. Accounts that attribute such stimulus-response compatibility effects to general translation processes predict that similar effects should occur for cross-modal stimulus and response sets. To test this prediction, three experiments were conducted examining four-choice reactions with (1) visual spatial-location stimuli assigned to speech responses, (2) speech stimuli assigned to keypress responses, and (3) symbolic visual stimuli assigned to speech responses. In all the experiments, responses were faster when correspondence between salient features of the stimulus and response sets was maintained, demonstrating that similar principles of translation operate both within and across modalities.

  4. [Identification of auditory laterality by means of a new dichotic digit test in Spanish, and body laterality and spatial orientation in children with dyslexia and in controls].

    PubMed

    Olivares-García, M R; Peñaloza-López, Y R; García-Pedroza, F; Jesús-Pérez, S; Uribe-Escamilla, R; Jiménez-de la Sancha, S

    In this study, a new dichotic digit test in Spanish (NDDTS) was applied in order to identify auditory laterality. We also evaluated body laterality and spatial location using the Subirana test. Both the dichotic test and the Subirana test for body laterality and spatial location were applied in a group of 40 children with dyslexia and in a control group made up of 40 children who were paired according to age and gender. The results of the three evaluations were analysed using the SPSS 10 software application, with Pearson's chi-squared test. It was seen that 42.5% of the children in the group of dyslexics had mixed auditory laterality, compared to 7.5% in the control group (p ≤ 0.05). Body laterality was mixed in 25% of dyslexic children and in 2.5% in the control group (p ≤ 0.05), and there was 72.5% spatial disorientation in the group of dyslexics, whereas only 15% (p ≤ 0.05) was found in the control group. The NDDTS proved to be a useful tool for demonstrating that mixed auditory laterality and auditory predominance of the left ear are linked to dyslexia. The results of this test exceed those obtained for body laterality. Spatial orientation is indeed altered in children with dyslexia. The importance of this finding makes it necessary to study the central auditory processes in all cases in order to define better rehabilitation strategies in Spanish-speaking children.
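    The group comparisons above rest on Pearson's chi-squared test over 2 × 2 count tables. As a sketch, the reported 42.5% vs 7.5% rates of mixed auditory laterality can be reconstructed as counts of 17/40 vs 3/40; the counts and the pure-stdlib helper below are illustrative, not taken from the paper:

```python
import math

def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-squared (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-squared with 1 df: P(X > x) = erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Counts reconstructed from the reported percentages (40 children per group):
# mixed auditory laterality in 42.5% of dyslexics (17/40) vs 7.5% of controls (3/40).
chi2, p = pearson_chi2_2x2(17, 23, 3, 37)
print(f"chi2 = {chi2:.2f}, p = {p:.5f}")
```

    With these counts the difference is comfortably below the paper's 0.05 threshold; SPSS may additionally apply a continuity correction for 2 × 2 tables, which would lower chi2 somewhat without changing the conclusion.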

  5. Development and evaluation of the LiSN & learn auditory training software for deficit-specific remediation of binaural processing deficits in children: preliminary findings.

    PubMed

    Cameron, Sharon; Dillon, Harvey

    2011-01-01

    The LiSN & Learn auditory training software was developed specifically to improve binaural processing skills in children with suspected central auditory processing disorder who were diagnosed as having a spatial processing disorder (SPD). SPD is defined here as a condition whereby individuals are deficient in their ability to use binaural cues to selectively attend to sounds arriving from one direction while simultaneously suppressing sounds arriving from another. As a result, children with SPD have difficulty understanding speech in noisy environments, such as in the classroom. To develop and evaluate the LiSN & Learn auditory training software for children diagnosed with the Listening in Spatialized Noise-Sentences Test (LiSN-S) as having an SPD. The LiSN-S is an adaptive speech-in-noise test designed to differentially diagnose spatial and pitch-processing deficits in children with suspected central auditory processing disorder. Participants were nine children (aged between 6 yr, 9 mo, and 11 yr, 4 mo) who performed outside normal limits on the LiSN-S. In a pre-post study of treatment outcomes, participants trained on the LiSN & Learn for 15 min per day for 12 weeks. Participants acted as their own control. Participants were assessed on the LiSN-S, as well as tests of attention and memory and a self-report questionnaire of listening ability. Performance on all tasks was reassessed after 3 mo where no further training occurred. The LiSN & Learn produces a three-dimensional auditory environment under headphones on the user's home computer. The child's task was to identify a word from a target sentence presented in background noise. A weighted up-down adaptive procedure was used to adjust the signal level of the target based on the participant's response. On average, speech reception thresholds on the LiSN & Learn improved by 10 dB over the course of training. 
As hypothesized, there were significant improvements in posttraining performance on the LiSN-S conditions
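    The "weighted up-down adaptive procedure" mentioned above is a standard psychoacoustic staircase (after Kaernbach): unequal step sizes make the track converge on the percent-correct point step_up / (step_up + step_down). The sketch below simulates such a track against a hypothetical logistic listener; the psychometric parameters and the 75% target are illustrative assumptions, not values from the study.

```python
import math
import random
import statistics

def prob_correct(snr_db, threshold_db=-10.0, slope=1.0):
    """Hypothetical logistic psychometric function: P(correct) at a given SNR."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold_db)))

def weighted_up_down(n_trials=400, start_snr=0.0, step_up=3.0, step_down=1.0, seed=1):
    """Weighted up-down track: step down by step_down after a correct trial,
    up by step_up after an error, converging where
    p * step_down = (1 - p) * step_up, i.e. p = step_up / (step_up + step_down)."""
    rng = random.Random(seed)
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        correct = rng.random() < prob_correct(snr)
        snr += -step_down if correct else step_up
    # Estimate the threshold from the second half of the track.
    return statistics.mean(track[n_trials // 2:])

est = weighted_up_down()
print(f"estimated 75%-correct SNR: {est:.1f} dB")
```

    In the training software, the adapted variable is the signal level of the target sentence in noise, so a falling track corresponds to the improving speech reception thresholds reported above.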

  6. Sensory Noise Explains Auditory Frequency Discrimination Learning Induced by Training with Identical Stimuli

    PubMed Central

    Micheyl, Christophe; McDermott, Josh H.; Oxenham, Andrew J.

    2010-01-01

    Thresholds in various visual and auditory perception tasks have been found to improve markedly with practice at intermediate levels of task difficulty. Recently, however, there have been reports that training with identical stimuli, which by definition were impossible to discriminate correctly beyond chance, could induce as much discrimination learning as training with different stimuli. These surprising findings have been interpreted as evidence that discrimination learning can occur in the absence of perceived differences between stimuli and need not involve the fine-tuning of a discrimination mechanism. Here we show that these counterintuitive findings of “discrimination learning without discrimination” can be understood simply by considering the effect of internal noise on sensory representations. Because of such noise, physically identical stimuli are unlikely to be perceived as being strictly identical. Given empirically derived levels of sensory noise, we show that perceived differences evoked by identical stimuli are actually not much smaller than those induced by the physical differences typically used in discrimination learning experiments. We suggest that findings of discrimination learning with identical stimuli can be explained without implicating any fundamentally new learning mechanism. PMID:19304592
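    The internal-noise argument can be made concrete with a small Monte Carlo: if each observation of a stimulus is perturbed by Gaussian sensory noise, two physically identical stimuli still evoke a nonzero perceived difference, and its average magnitude is close to that produced by a small physical difference. The noise level and delta below are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def mean_perceived_diff(delta, sigma, n=100_000, seed=0):
    """Mean absolute perceived difference between two stimuli whose physical
    difference is `delta`, with independent Gaussian internal noise of
    standard deviation sigma added to each observation."""
    rng = random.Random(seed)
    return statistics.mean(
        abs(rng.gauss(0.0, sigma) - (delta + rng.gauss(0.0, sigma)))
        for _ in range(n)
    )

sigma = 1.0  # internal noise, in arbitrary sensory units (an assumption)
same = mean_perceived_diff(delta=0.0, sigma=sigma)
small = mean_perceived_diff(delta=0.5, sigma=sigma)
print(f"identical stimuli: mean perceived difference = {same:.2f}")
print(f"delta = 0.5:       mean perceived difference = {small:.2f}")
```

    With these (illustrative) numbers, the identical-stimulus condition yields a mean perceived difference nearly as large as a genuine physical difference of half a noise standard deviation, which is the paper's central point.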

  7. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2010-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by studying natural speech acquisition, and it provides a means of probing the boundaries and constraints that general auditory perception and cognition bring to the task of speech category learning. In this study, we used a multimodal, video-game-based implicit learning paradigm to train participants to categorize acoustically complex, nonlinguistic sounds. Mismatch negativity responses to the nonspeech stimuli were collected before and after training to investigate the degree to which neural changes supporting the learning of these nonspeech categories parallel those typically observed for speech category acquisition. Results indicate that changes in mismatch negativity resulting from the nonspeech category learning closely resemble patterns of change typically observed during speech category learning. This suggests that the often-observed “specialized” neural responses to speech sounds may result, at least in part, from the expertise we develop with speech categories through experience rather than from properties unique to speech (e.g., linguistic or vocal tract gestural information). Furthermore, particular characteristics of the training paradigm may inform our understanding of mechanisms that support natural speech acquisition. PMID:19929331

  8. The Application of an Animal Auditory Training Method as an Interchangeable Auditory Processing Learning Method for Children with Autism

    ERIC Educational Resources Information Center

    Adams, Deborah L.

    2012-01-01

    While the prevalence of autism continues to increase, there is a growing need for techniques that facilitate teaching this challenging population. The use of visual systems and prompting has been prevalent as well as effective; however, the use of auditory systems has been lacking in investigation. Ten children between the chronological ages of 4…

  10. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  12. Auditory Same/Different Concept Learning and Generalization in Black-Capped Chickadees (Poecile atricapillus)

    PubMed Central

    Hoeschele, Marisa; Cook, Robert G.; Guillette, Lauren M.; Hahn, Allison H.; Sturdy, Christopher B.

    2012-01-01

    Abstract concept learning was thought to be uniquely human, but has since been observed in many other species. Discriminating same from different is one abstract relation that has been studied frequently. In the current experiment, using operant conditioning, we tested whether black-capped chickadees (Poecile atricapillus) could discriminate sets of auditory stimuli based on whether all the sounds within a sequence were the same or different from one another. The chickadees were successful at solving this same/different relational task, and transferred their learning to same/different sequences involving novel combinations of training notes and novel notes within the range of pitches experienced during training. The chickadees showed limited transfer to pitches that were not used in training, suggesting that the processing of absolute pitch may constrain their relational performance. Our results indicate, for the first time, that black-capped chickadees readily form relational auditory same and different categories, adding to the list of perceptual, behavioural, and cognitive abilities that make this species an important comparative model for human language and cognition. PMID:23077660

  13. Enhanced cognitive flexibility in reversal learning induced by removal of the extracellular matrix in auditory cortex

    PubMed Central

    Happel, Max F. K.; Niekisch, Hartmut; Castiblanco Rivera, Laura L.; Ohl, Frank W.; Deliano, Matthias; Frischknecht, Renato

    2014-01-01

    During brain maturation, the occurrence of the extracellular matrix (ECM) terminates juvenile plasticity by mediating structural stability. Interestingly, enzymatic removal of the ECM restores juvenile forms of plasticity, as demonstrated, for instance, by topographical reconnectivity in sensory pathways. However, the degree to which the mature ECM represents a compromise between stability and flexibility in the adult brain, affecting synaptic plasticity as a fundamental basis for learning, lifelong memory formation, and higher cognitive functions, remains largely unknown. In this study, we removed the ECM in the auditory cortex of adult Mongolian gerbils during specific phases of cortex-dependent auditory relearning, which was induced by the contingency reversal of a frequency-modulated tone discrimination, a task requiring high behavioral flexibility. We found that ECM removal promoted a significant increase in relearning performance, without erasing already established—that is, learned—capacities when continuing discrimination training. The cognitive flexibility required for reversal learning of previously acquired behavioral habits, commonly understood to rely mainly on frontostriatal circuits, was enhanced by promoting synaptic plasticity via ECM removal within the sensory cortex. Our findings further suggest experimental modulation of the cortical ECM as a tool to open short-term windows of enhanced activity-dependent reorganization, allowing for guided neuroplasticity. PMID:24550310

  14. Hippocampal long-term depression is facilitated by the acquisition and updating of memory of spatial auditory content and requires mGlu5 activation.

    PubMed

    Dietz, Birte; Manahan-Vaughan, Denise

    2017-03-15

    Long-term potentiation (LTP) and long-term depression (LTD) are key cellular processes that support memory formation. Whereas increases of synaptic strength by means of LTP may support the creation of a spatial memory 'engram', LTD appears to play an important role in refining and optimising experience-dependent encoding. A differentiation in the role of hippocampal subfields is apparent. For example, LTD in the dentate gyrus (DG) is enabled by novel learning about large visuospatial features, whereas in area CA1, it is enabled by learning about discrete aspects of spatial content, whereby, both discrete visuospatial and olfactospatial cues trigger LTD in CA1. Here, we explored to what extent local audiospatial cues facilitate information encoding in the form of LTD in these subfields. Coupling of low frequency afferent stimulation (LFS) with discretely localised, novel auditory tones in the sonic hearing, or ultrasonic range, facilitated short-term depression (STD) into LTD (>24 h) in CA1, but not DG. Re-exposure to the now familiar audiospatial configuration ca. 1 week later failed to enhance STD. Reconfiguration of the same audiospatial cues resulted anew in LTD when ultrasound, but not non-ultrasound cues were used. LTD facilitation that was triggered by novel exposure to spatially arranged tones, or to spatial reconfiguration of the same tones were both prevented by an antagonism of the metabotropic glutamate receptor, mGlu5. These data indicate that, if behaviourally salient enough, the hippocampus can use audiospatial cues to facilitate LTD that contributes to the encoding and updating of spatial representations. Effects are subfield-specific, and require mGlu5 activation, as is the case for visuospatial information processing. These data reinforce the likelihood that LTD supports the encoding of spatial features, and that this occurs in a qualitative and subfield-specific manner. They also support that mGlu5 is essential for synaptic encoding of spatial

  15. Spatially Distributed Instructions Improve Learning Outcomes and Efficiency

    ERIC Educational Resources Information Center

    Jang, Jooyoung; Schunn, Christian D.; Nokes, Timothy J.

    2011-01-01

    Learning requires applying limited working memory and attentional resources to intrinsic, germane, and extraneous aspects of the learning task. To reduce the especially undesirable extraneous load aspects of learning environments, cognitive load theorists suggest that spatially integrated learning materials should be used instead of spatially…

  17. Testing the Role of Dorsal Premotor Cortex in Auditory-Motor Association Learning Using Transcranial Magnetic Stimulation (TMS)

    PubMed Central

    Lega, Carlotta; Stephan, Marianne A.; Zatorre, Robert J.; Penhune, Virginia

    2016-01-01

    Interactions between the auditory and the motor systems are critical in music as well as in other domains, such as speech. The premotor cortex, specifically the dorsal premotor cortex (dPMC), seems to play a key role in auditory-motor integration, and in mapping the association between a sound and the movement used to produce it. In the present studies we tested the causal role of the dPMC in learning and applying auditory-motor associations using 1 Hz repetitive Transcranial Magnetic Stimulation (rTMS). In this paradigm, non-musicians learned a set of auditory-motor associations through melody training in two contexts: first when the sound-to-key-press mapping was in a conventional sequential order (low to high tones mapped onto keys from left to right), and then when it was in a novel scrambled order. Participants' ability to match the four pitches to four computer keys was tested before and after the training. In both experiments, the group that received 1 Hz rTMS over the dPMC showed no significant improvement on the pitch-matching task following training, whereas the control group (who received rTMS to visual cortex) did. Moreover, in Experiment 2, where the pitch-key mapping was novel, rTMS over the dPMC also interfered with learning. These findings suggest that rTMS over dPMC disturbs the formation of auditory-motor associations, especially when the association is novel and must be newly learned. The present results contribute to a better understanding of the role of dPMC in auditory-motor integration, suggesting a critical role of dPMC in learning the link between an action and its associated sound. PMID:27684369

  18. Mismatch Negativity is a Sensitive and Predictive Biomarker of Perceptual Learning During Auditory Cognitive Training in Schizophrenia.

    PubMed

    Perez, Veronica B; Tarasenko, Melissa; Miyakoshi, Makoto; Pianka, Sean T; Makeig, Scott D; Braff, David L; Swerdlow, Neal R; Light, Gregory A

    2017-03-22

    Computerized cognitive training is gaining empirical support for use in the treatment of schizophrenia (SZ). Although cognitive training is efficacious for SZ at a group level when delivered in sufficiently intensive doses (eg, 30-50 h), there is variability in individual patient response. The identification of biomarkers sensitive to the neural systems engaged by cognitive training interventions early in the course of treatment could facilitate personalized assignment to treatment. This proof-of-concept study was conducted to determine whether mismatch negativity (MMN), an event-related potential index of auditory sensory discrimination associated with cognitive and psychosocial functioning, would predict gains in auditory perceptual learning and exhibit malleability after initial exposure to the early stages of auditory cognitive training in SZ. MMN was assessed in N=28 SZ patients immediately before and after completing 1 h of a speeded time-order judgment task of two successive frequency-modulated sweeps (Posit Science 'Sound Sweeps' exercise). All SZ patients exhibited the expected improvements in auditory perceptual learning over the 1 h training period (p<0.001), consistent with previous results. Larger MMN amplitudes recorded both before and after the training exercises were associated with greater gains in auditory perceptual learning (r=-0.5 and r=-0.67, respectively, ps<0.01). Significant pretraining vs posttraining MMN amplitude reduction was also observed (p<0.02). MMN is a sensitive index of the neural systems engaged in a single session of auditory cognitive training in SZ. These findings encourage future trials of MMN as a biomarker for individual assignment, prediction, and/or monitoring of patient response to procognitive interventions, including auditory cognitive training in SZ. Neuropsychopharmacology advance online publication, 22 March 2017; doi:10.1038/npp.2017.25.

  19. Rey Auditory Verbal Learning Test performance of a Federal corrections sample with acquired immunodeficiency syndrome.

    PubMed

    Ryan, J J; Paolo, A M; Skrade, M

    1992-01-01

    The Rey Auditory-Verbal Learning Test (AVLT) was administered to 30 inmates from three United States Federal corrections facilities. Fifteen were HIV seropositive and carried a diagnosis of AIDS; 15 were seronegative controls. The groups were comparable in age, education, sex, estimated premorbid IQ, and ethnic make-up. Both groups learned across trials, and produced similar acquisition curves. They also showed equivalent registration, but controls performed significantly better than subjects with AIDS on AVLT Trials II, IV, V, Recognition, and sum of I through V. AIDS subjects made significantly more intrusion errors than controls, suggesting that seropositive inmates performed more poorly, at least in part, because they experienced difficulty discriminating relevant from irrelevant responses during recall. Evaluation of serial position effects suggested that AIDS subjects experienced recall problems only with the middle segment of the word list. This finding may be unique to persons with AIDS and is consistent with the view that distinct clinical groups produce different recall patterns.
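    The serial-position analysis described above amounts to segmenting the 15-item list into primacy, middle, and recency thirds and comparing recall within each. A minimal sketch; the recalled positions below are hypothetical, chosen only to illustrate the middle-segment deficit:

```python
# Recall proportions by serial-position segment for a 15-word AVLT-style list.
SEGMENTS = {
    "primacy": range(1, 6),    # positions 1-5
    "middle": range(6, 11),    # positions 6-10
    "recency": range(11, 16),  # positions 11-15
}

def segment_recall(recalled_positions):
    """Proportion of words recalled within each serial-position segment."""
    return {name: sum(p in recalled_positions for p in seg) / len(seg)
            for name, seg in SEGMENTS.items()}

# Hypothetical pattern resembling the AIDS group: middle items selectively lost.
print(segment_recall({1, 2, 3, 4, 12, 13, 14, 15}))
```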

  20. Anatomical traces of juvenile learning in the auditory system of adult barn owls.

    PubMed

    Linkenhoker, Brie Ann; von der Ohe, Christina G; Knudsen, Eric I

    2005-01-01

    Early experience plays a powerful role in shaping adult neural circuitry and behavior. In barn owls, early experience markedly influences sound localization. Juvenile owls that learn new, abnormal associations between auditory cues and locations in visual space as a result of abnormal visual experience can readapt to the same abnormal experience in adulthood, when plasticity is otherwise limited. Here we show that abnormal anatomical projections acquired during early abnormal sensory experience persist long after normal experience has been restored. These persistent projections are perfectly situated to provide a physical framework for subsequent readaptation in adulthood to the abnormal sensory conditions experienced in early life. Our results show that anatomical changes that support strong learned neural connections early in life can persist even after they are no longer functionally expressed. This maintenance of silenced neural circuitry that was once adaptive may represent an important mechanism by which the brain preserves a record of early experience.

  1. Selective importance of the rat anterior thalamic nuclei for configural learning involving distal spatial cues

    PubMed Central

    Dumont, Julie R; Amin, Eman; Aggleton, John P

    2013-01-01

    To test potential parallels between hippocampal and anterior thalamic function, rats with anterior thalamic lesions were trained on a series of biconditional learning tasks. The anterior thalamic lesions did not disrupt the learning of two biconditional associations in operant chambers where a specific auditory stimulus (tone or click) had a differential outcome depending on whether it was paired with a particular visual context (spot or checkered wallpaper) or a particular thermal context (warm or cool). Likewise, rats with anterior thalamic lesions successfully learnt a biconditional task when they were reinforced for digging in one of two distinct cups (containing either beads or shredded paper), depending on the particular appearance of the local context on which the cup was placed (one of two textured floors). In contrast, the same rats were severely impaired at learning the biconditional rule to select a specific cup when in a particular location within the test room. Place learning was then tested with a series of go/no-go discriminations. Rats with anterior thalamic nuclei lesions could learn to discriminate between two locations when they were approached from a constant direction. They could not, however, use this acquired location information to solve a subsequent spatial biconditional task where those same places dictated the correct choice of digging cup. Anterior thalamic lesions produced a selective, but severe, biconditional learning deficit when the task incorporated distal spatial cues. This deficit mirrors that seen in rats with hippocampal lesions, thereby extending potential interdependencies between the two sites. PMID:24215178
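    A biconditional association has an XOR structure: neither the cue nor the context alone predicts the outcome; only their conjunction does. A sketch with a hypothetical cue-context mapping (the labels below are illustrative, not the study's actual contingencies):

```python
# Hypothetical biconditional task: the outcome of an auditory cue depends
# on which visual context it is paired with (an XOR structure).
CONTINGENCY = {
    ("tone", "spotted"): "reward",     ("tone", "checkered"): "no reward",
    ("click", "spotted"): "no reward", ("click", "checkered"): "reward",
}

def outcome(cue, context):
    return CONTINGENCY[(cue, context)]

# Each cue is rewarded in exactly one context, so no single element
# (cue or context) can be used as a simple predictor on its own.
print(outcome("tone", "spotted"), outcome("tone", "checkered"))
```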

  2. Sex effects on spatial learning but not on spatial memory retrieval in healthy young adults.

    PubMed

    Piber, Dominique; Nowacki, Jan; Mueller, Sven C; Wingenfeld, Katja; Otte, Christian

    2017-08-25

    Sex differences have been found in spatial learning and spatial memory, with several studies indicating that males outperform females. Using the virtual Morris Water Maze (vMWM) task, we tested in a large student sample whether sex differences in spatial cognitive processes are attributable to differences in spatial learning or spatial memory retrieval. We tested 90 healthy students (45 women and 45 men) with a mean age of 23.5 years (SD=3.5). Spatial learning and spatial memory retrieval were measured using the vMWM task, during which participants had to search a virtual pool for a hidden platform, facilitated by visual cues surrounding the pool. Several learning trials assessed spatial learning, while a separate probe trial assessed spatial memory retrieval. We found a significant sex effect during spatial learning, with males showing shorter latency and shorter path length than females (all p<0.001). Yet, there was no significant sex effect in spatial memory retrieval (p=0.615). Furthermore, post-hoc analyses revealed significant sex differences in spatial search strategies (p<0.05), but no difference in the number of platform crossings (p=0.375). Our results indicate that in healthy young adults, males show faster spatial learning in a virtual environment than females. Interestingly, we found no significant sex differences during spatial memory retrieval. Our study raises the question of whether men and women use different learning strategies that nevertheless yield equal spatial memory retrieval performance.
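    The two learning measures reported (latency and path length) can be computed directly from a tracked trajectory. A minimal sketch; the position samples and sampling interval below are hypothetical:

```python
import math

def path_length(samples):
    """Total distance travelled along a sequence of (x, y) position samples."""
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

# Hypothetical trajectory sampled every 0.5 s during one learning trial,
# ending at the sample where the platform was reached.
trajectory = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
dt = 0.5                                  # seconds between samples
latency = (len(trajectory) - 1) * dt      # time until the platform was reached

print(path_length(trajectory), latency)
```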

  3. The effects of distraction and a brief intervention on auditory and visual-spatial working memory in college students with attention deficit hyperactivity disorder.

    PubMed

    Lineweaver, Tara T; Kercood, Suneeta; O'Keeffe, Nicole B; O'Brien, Kathleen M; Massey, Eric J; Campbell, Samantha J; Pierce, Jenna N

    2012-01-01

    Two studies addressed how young adult college students with attention deficit hyperactivity disorder (ADHD) (n = 44) compare to their nonaffected peers (n = 42) on tests of auditory and visual-spatial working memory (WM), how vulnerable they are to auditory and visual distractions, and how they are affected by a simple intervention. Students with ADHD demonstrated worse auditory WM than did controls. A near-significant trend indicated that auditory distractions interfered with the visual WM of both groups and that, whereas controls were also vulnerable to visual distractions, visual distractions improved visual WM in the ADHD group. The intervention was ineffective. Limited correlations emerged between self-reported ADHD symptoms and objective test performances; students with ADHD who perceived themselves as more symptomatic often had better WM and were less vulnerable to distractions than their ADHD peers.

  4. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    of the approach (or departure) of a low-altitude-flying and invisible (obscured by vegetation) UH-1B helicopter. He reported an absolute mean...psichici. Archives Fisiologia 1911, 9, 523–574. [Cited by Gulick et al. (1989)]. Aharonson, V.; Furst, M.; Levine, R. A.; Chaigrecht, M.; Korczyn

  5. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    Teas, D. C.; Jeffress, L. A. Localization of High Frequency Tones. Journal of the Acoustical Society of America 1957, 29, 988–991. Feinstein, S...Neurophysiology 2001, 86, 2647–2666. Itoh, M.; Adel, B. von; Kelly, J. B. Sound Localization after Transection of the Commissure of Probst in the Albino Rat...Neurology 1957, 7, 655–663. Sandel, T. T.; Teas, D. C.; Feddersen, W. E.; Jeffress, L. A. Localization of Sound From Single and Paired Sources. Journal

  6. Brain dynamics that correlate with effects of learning on auditory distance perception

    PubMed Central

    Wisniewski, Matthew G.; Mercado, Eduardo; Church, Barbara A.; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance. PMID:25538550
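    The ERS and ERD measures referred to above are conventionally expressed as band power change relative to a pre-stimulus baseline, in percent. A minimal sketch of that convention; the power values below are hypothetical:

```python
def erd_ers_percent(event_power, baseline_power):
    """Band power change relative to baseline, in percent.
    Negative values = event-related desynchronization (ERD);
    positive values = event-related synchronization (ERS)."""
    return 100.0 * (event_power - baseline_power) / baseline_power

print(erd_ers_percent(4.0, 8.0))   # alpha-band power drop: ERD
print(erd_ers_percent(12.0, 8.0))  # theta-band power rise: ERS
```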

  7. Spatial learning in men undergoing alcohol detoxification.

    PubMed

    Ceccanti, Mauro; Hamilton, Derek; Coriale, Giovanna; Carito, Valentina; Aloe, Luigi; Chaldakov, George; Romeo, Marina; Ceccanti, Marco; Iannitelli, Angela; Fiore, Marco

    2015-10-01

    Alcohol dependence is a major public health problem worldwide. Brain and behavioral disruptions, including changes in cognitive abilities, are common features of alcohol addiction. The present study therefore investigated spatial learning and memory in 29 alcoholic men undergoing alcohol detoxification, using a virtual Morris maze task. As age-matched controls, we recruited 29 men from among occasional drinkers with no history of alcohol dependence or alcohol-related disease and a negative blood alcohol level at the time of testing. We found that responses in the virtual Morris maze were impaired in men undergoing alcohol detoxification. Notably, they showed increased latencies for the first movement during the trials, increased latencies in retrieving the hidden platform, and increased latencies in reaching the visible platform. These findings were associated with reduced swimming time, compared to controls, in the target quadrant of the pool where the platform had been located during the 4 hidden-platform trials of the learning phase. Such increased latencies may reflect motor-control, attentional, and motivational deficits associated with alcohol detoxification.

  8. Auditory experience refines cortico-basal ganglia inputs to motor cortex via remapping of single axons during vocal learning in zebra finches.

    PubMed

    Miller-Sims, Vanessa C; Bottjer, Sarah W

    2012-02-01

    Experience-dependent changes in neural connectivity underlie developmental learning and result in life-long changes in behavior. In songbirds, axons from the cortical region LMAN(core) (core region of lateral magnocellular nucleus of anterior nidopallium) convey the output of a basal ganglia circuit necessary for song learning to vocal motor cortex [robust nucleus of the arcopallium (RA)]. This axonal projection undergoes remodeling during the sensitive period for learning to achieve topographic organization. To examine how auditory experience instructs the development of connectivity in this pathway, we compared the morphology of individual LMAN(core)→RA axon arbors in normal juvenile songbirds to those of birds raised in white noise. The spatial extent of axon arbors decreased during the first week of vocal learning, even in the absence of normal auditory experience. During the second week of vocal learning axon arbors of normal birds showed a loss of branches and varicosities; in contrast, experience-deprived birds showed no reduction in branches or varicosities and maintained some arbors in the wrong topographic location. Thus both experience-independent and experience-dependent processes are necessary to establish topographic organization in juvenile birds, which may allow birds to modify their vocal output in a directed manner and match their vocalizations to a tutor song. Many LMAN(core) axons of juvenile birds, but not adults, extended branches into dorsal arcopallium (Ad), a region adjacent to RA that is part of a parallel basal ganglia pathway also necessary for vocal learning. This transient projection provides a point of integration between the two basal ganglia pathways, suggesting that these branches convey corollary discharge signals as birds are actively engaged in learning.

  9. Beta phase synchronization in the frontal-temporal-cerebellar network during auditory-to-motor rhythm learning

    PubMed Central

    Edagawa, Kouki; Kawasaki, Masahiro

    2017-01-01

    Rhythm is an essential element of dancing and music. To investigate the neural mechanisms underlying how rhythm is learned, we recorded electroencephalographic (EEG) data during a rhythm-reproducing task that asked participants to memorize an auditory stimulus and reproduce it via tapping. Based on the behavioral results, we divided the participants into Learning and No-learning groups. EEG analysis showed that error-related negativity (ERN) in the Learning group was larger than in the No-learning group. Time-frequency analysis of the EEG data showed that, in the Learning group, beta power in the right and left temporal areas was smaller at the late learning stage than at the early learning stage. Additionally, beta power in the temporal and cerebellar areas while learning to reproduce the rhythm was larger in the Learning group than in the No-learning group. Moreover, phase synchronization between frontal and temporal regions and between temporal and cerebellar regions was larger at late stages of learning than at early stages. These results indicate that frontal-temporal-cerebellar beta neural circuits might be related to auditory-motor rhythm learning. PMID:28225010
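    Phase synchronization between two regions is commonly quantified with a phase-locking value (PLV): the magnitude of the mean unit phasor of the phase differences across samples or trials. A minimal sketch under that standard definition; the phase series below are hypothetical:

```python
import cmath

def plv(phases_a, phases_b):
    """Phase-locking value between two instantaneous-phase series (radians).
    1.0 = a constant phase lag across samples; near 0 = random phase relation."""
    phasors = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(phasors) / len(phasors))

# A constant 0.5 rad lag between the two series gives perfect locking.
print(plv([0.0, 1.0, 2.0, 3.0], [0.5, 1.5, 2.5, 3.5]))
```

In practice the instantaneous phases would come from a Hilbert transform or wavelet decomposition of band-passed EEG; the PLV itself reduces to the few lines above.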

  10. Contrast Enhancement without Transient Map Expansion for Species-Specific Vocalizations in Core Auditory Cortex during Learning

    PubMed Central

    Shepard, Kathryn N.; Chong, Kelly K.

    2016-01-01

    Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups’ postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning. PMID:27957529

  11. Guidance of Spatial Attention by Incidental Learning and Endogenous Cuing

    ERIC Educational Resources Information Center

    Jiang, Yuhong V.; Swallow, Khena M.; Rosenbaum, Gail M.

    2013-01-01

    Our visual system is highly sensitive to regularities in the environment. Locations that were important in one's previous experience are often prioritized during search, even though observers may not be aware of the learning. In this study we characterized the guidance of spatial attention by incidental learning of a target's spatial probability,…

  12. Use of the Rey Auditory Verbal Learning Test in Differentiating Normal Aging from Alzheimer's and Parkinson's Dementia.

    ERIC Educational Resources Information Center

    Tierney, Mary C.; And Others

    1994-01-01

    Thirty-eight elderly control subjects performed better than did 18 patients with moderate Alzheimer's disease (AD), 33 with severe AD, and 12 with Parkinson's dementia on all measures of the Rey Auditory Verbal Learning Test. Results indicate that the test is useful in distinguishing AD from Parkinson's dementia. (SLD)

  14. EXEL; Experience for Children in Learning. Parent-Directed Activities to Develop: Oral Expression, Visual Discrimination, Auditory Discrimination, Motor Coordination.

    ERIC Educational Resources Information Center

    Behrmann, Polly; Millman, Joan

    The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…

  15. Reduced Sensory Oscillatory Activity during Rapid Auditory Processing as a Correlate of Language-Learning Impairment

    PubMed Central

    Heim, Sabine; Friedman, Jennifer Thomas; Keil, Andreas; Benasich, April A.

    2010-01-01

    Successful language acquisition has been hypothesized to involve the ability to integrate rapidly presented, brief acoustic cues in sensory cortex. A body of work has suggested that this ability is compromised in language-learning impairment (LLI). The present research aimed to examine sensory integration during rapid auditory processing by means of electrophysiological measures of oscillatory brain activity using data from a larger longitudinal study. Twenty-nine children with LLI and control participants with typical language development (n=18) listened to tone doublets presented at a temporal interval that is essential for accurate speech processing (70-ms interstimulus interval). The children performed a deviant (pitch change of second tone) detection task, or listened passively. The electroencephalogram was recorded from 64 electrodes. Data were source-projected to the auditory cortices and submitted to wavelet analysis, resulting in time-frequency representations of electrocortical activity. Results show significantly reduced amplitude and phase-locking of early (45–75 ms) oscillations in the gamma-band range (29–52 Hz), specifically in the LLI group, for the second stimulus of the tone doublet. This suggests altered temporal organization of sensory oscillatory activity in LLI when processing rapid sequences. PMID:21822356

  16. Prior experience with negative spectral correlations promotes information integration during auditory category learning.

    PubMed

    Scharinger, Mathias; Henry, Molly J; Obleser, Jonas

    2013-07-01

    Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds, namely speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or the combination of more than one dimension (information integration) and provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information integration strategy, while in Experiment 2, optimal categorization was possible only by using an information integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information integration strategy, if there was immediate feedback. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions, as compared with distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.
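    The two strategy families of decision bound theory can be sketched as classifiers over two acoustic dimensions. The dimension names, weights, and criteria below are hypothetical illustrations, not fitted values from the study:

```python
def rule_based(pitch, timbre, criterion=0.5):
    """Unidimensional rule: the category depends on one dimension only."""
    return "A" if pitch < criterion else "B"

def information_integration(pitch, timbre, w=(1.0, 1.0), c=1.0):
    """Linear integration: both dimensions feed a single decision bound."""
    return "A" if w[0] * pitch + w[1] * timbre < c else "B"

# A stimulus the two strategies classify differently: its pitch value alone
# falls below the rule criterion, but the weighted sum crosses the bound.
stim = (0.4, 0.8)
print(rule_based(*stim), information_integration(*stim))
```

Fitting which bound best explains a listener's responses is how the theory diagnoses whether a rule-based or information-integration strategy was used.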

  17. Learning to modulate sensorimotor rhythms with stereo auditory feedback for a brain-computer interface.

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2012-01-01

    Motor imagery can be used to modulate sensorimotor rhythms (SMR) enabling detection of voltage fluctuations on the surface of the scalp using electroencephalographic (EEG) electrodes. Feedback is essential in learning how to intentionally modulate SMR in non-muscular communication using a brain-computer interface (BCI). A BCI that is not reliant upon the visual modality for feedback is an attractive means of communication for the blind and the vision impaired and to release the visual channel for other purposes during BCI usage. The aim of this study is to demonstrate the feasibility of replacing the traditional visual feedback modality with stereo auditory feedback. Twenty participants split into equal groups took part in ten BCI sessions involving motor imagery. The visual feedback group performed best using two performance measures but did not show improvement over time whilst the auditory group improved as the study progressed. Multiple loudspeaker presentation of audio allows the listener to intuitively assign each of two classes to the corresponding lateral position in a free-field listening environment.

  18. Development of Critical Spatial Thinking through GIS Learning

    ERIC Educational Resources Information Center

    Kim, Minsung; Bednarz, Robert

    2013-01-01

    This study developed an interview-based critical spatial thinking oral test and used the test to investigate the effects of Geographic Information System (GIS) learning on three components of critical spatial thinking: evaluating data reliability, exercising spatial reasoning, and assessing problem-solving validity. Thirty-two students at a large…

  20. Think3d!: Improving Mathematics Learning through Embodied Spatial Training

    ERIC Educational Resources Information Center

    Burte, Heather; Gardony, Aaron L.; Hutton, Allyson; Taylor, Holly A.

    2017-01-01

    Spatial thinking skills positively relate to Science, Technology, Engineering, and Math (STEM) outcomes, but spatial training is largely absent in elementary school. Elementary school is a time when children develop foundational cognitive skills that will support STEM learning throughout their education. Spatial thinking should be considered a…

  1. Think3d!: Improving mathematics learning through embodied spatial training.

    PubMed

    Burte, Heather; Gardony, Aaron L; Hutton, Allyson; Taylor, Holly A

    2017-01-01

    Spatial thinking skills positively relate to Science, Technology, Engineering, and Math (STEM) outcomes, but spatial training is largely absent in elementary school. Elementary school is a time when children develop foundational cognitive skills that will support STEM learning throughout their education. Spatial thinking should be considered a foundational cognitive skill. The present research examined the impact of an embodied spatial training program on elementary students' spatial and mathematical thinking. Students in rural elementary schools completed spatial and math assessments prior to and after participating in an origami and pop-up paper engineering-based program, called Think3d!. Think3d! uses embodied tasks, such as folding and cutting paper, to train two-dimensional to three-dimensional spatial thinking. Analyses explored spatial thinking gains, mathematics gains (specifically for problem types expected to show gains from spatial training), and factors predicting mathematics gains. Results showed spatial thinking gains in two assessments. Using a math categorization to target problems more and less likely to be impacted by spatial training, we found that all students improved on real-world math problems and older students improved on visual and spatial math problems. Further, the results are suggestive of developmental time points for implementing embodied spatial training related to applying spatial thinking to math. Finally, the spatial thinking assessment that was most highly related to training activities also predicted math performance gains. Future research should explore developmental issues related to how embodied spatial training might support STEM learning and outcomes.

  2. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

    Purpose Although auditory training has traditionally been studied in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on the sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884
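    The "simulations of cochlear implant processing" presented to normal-hearing listeners in studies like this are typically noise-excited envelope vocoders. The sketch below is an illustrative implementation only, not the authors' processing chain; the band count, filter order, and toy input signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Crude noise-excited vocoder: split the input into log-spaced bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise (discarding fine spectral structure, as a CI does)."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    out = np.zeros(len(x))
    rng = np.random.default_rng(0)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))                          # envelope
        carrier = sosfilt(sos, rng.standard_normal(len(x)))  # band noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)               # normalize

fs = 16000
t = np.arange(fs) / fs
# Toy "speech-like" input: an amplitude-modulated tone.
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
y = noise_vocode(speech_like, fs)
print(y.shape)
```

    Real-time portable vocoders of the kind described would apply the same band-envelope logic in streaming buffers rather than over a whole signal at once.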

  3. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    PubMed

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
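    The frequency-tagging logic can be illustrated with a toy simulation: if every tone evokes a response, but (after learning) triplet-initial tones evoke a larger one, spectral power appears at the triplet rate (one third of the tone rate) as well as at the tone rate. The sampling rate, tone rate, and amplitudes below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 100.0        # sampling rate of the simulated response (Hz)
tone_rate = 4.0   # tones presented at 4 Hz -> triplet rate 4/3 Hz
dur = 60.0        # 60 s of stream
n = int(fs * dur)

# Idealized neural response: an impulse per tone, boosted on triplet onsets.
resp = np.zeros(n)
tone_times = np.arange(0, dur, 1 / tone_rate)
for i, tt in enumerate(tone_times):
    amp = 1.5 if i % 3 == 0 else 1.0   # larger response at triplet start
    resp[int(tt * fs)] += amp

spec = np.abs(np.fft.rfft(resp))
freqs = np.fft.rfftfreq(n, 1 / fs)

def power_at(f):
    """Spectral magnitude at the bin closest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Tags emerge at both the tone rate and the triplet (tritone) rate.
print(power_at(4.0), power_at(4.0 / 3.0))
```

    With no amplitude boost on triplet onsets (no learning), the 4/3 Hz tag vanishes, which is exactly the contrast the frequency-tagged MEG analysis exploits.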

  4. Oscillatory alpha-band mechanisms and the deployment of spatial attention to anticipated auditory and visual target locations: Supramodal or sensory-specific control mechanisms?

    PubMed Central

    Banerjee, Snigdha; Snyder, Adam C.; Molholm, Sophie; Foxe, John J.

    2011-01-01

    Oscillatory alpha-band activity (8–15 Hz) over parieto-occipital cortex in humans plays an important role in suppression of processing for inputs at to-be-ignored regions of space, with increased alpha-band power observed over cortex contralateral to locations expected to contain distractors. It is unclear if similar processes operate during deployment of spatial attention in other sensory modalities. Evidence from lesion patients suggests that parietal regions house supramodal representations of space. The parietal lobes are prominent generators of alpha-oscillations, raising the possibility that alpha is a neural signature of supramodal spatial attention. Further, when spatial attention is deployed within vision, processing of task-irrelevant auditory inputs at attended locations is also enhanced, pointing to automatic links between spatial deployments across senses. Here, we asked whether lateralized alpha-band activity is also evident in a purely auditory spatial-cueing task, and whether it had the same underlying generator configuration as in a purely visuo-spatial task. If common to both sensory systems, this would provide strong support for “supramodal” attention theory. Alternatively, alpha-band differences between auditory and visual tasks would support a sensory-specific account. Lateralized shifts in alpha-band activity were indeed observed during a purely auditory-spatial task. Crucially, there were clear differences in scalp topographies of this alpha-activity depending on the sensory system within which spatial attention was deployed. Findings suggest that parietally-generated alpha-band mechanisms are central to attentional deployments across modalities but that they are invoked in a sensory-specific manner. The data support an interactivity account, whereby a supramodal system interacts with sensory-specific control systems during deployment of spatial attention. PMID:21734284

  5. Age-dependent impairment of auditory processing under spatially focused and divided attention: an electrophysiological study.

    PubMed

    Wild-Wall, Nele; Falkenstein, Michael

    2010-01-01

    Using event-related potentials (ERPs), the present study examines whether age-related differences in preparation and processing emerge especially during divided attention. Binaurally presented auditory cues called for focused (valid and invalid) or divided attention to one or both ears. Responses were required to subsequent monaurally presented valid targets (vowels), but had to be suppressed to non-target vowels or invalidly cued vowels. Middle-aged participants were more impaired under divided attention than young participants, likely due to an age-related decline in preparatory attention following cues, as was reflected in a decreased CNV. Under divided attention, target processing was increased in the middle-aged, likely reflecting compensatory effort to fulfill task requirements in the difficult condition. Additionally, middle-aged participants processed invalidly cued stimuli more intensely, as was reflected by stimulus ERPs. The results suggest an age-related impairment in attentional preparation after auditory cues, especially under divided attention, and latent difficulties in suppressing irrelevant information.

  6. Accurate sound localization in reverberant environments is mediated by robust encoding of spatial cues in the auditory midbrain.

    PubMed

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-04-16

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments.

  7. Enhanced auditory spatial performance using individualized head-related transfer functions: An event-related potential study.

    PubMed

    Wisniewski, Matthew G; Romigh, Griffin D; Kenzig, Stephanie M; Iyer, Nandini; Simpson, Brian D; Thompson, Eric R; Rothwell, Clayton D

    2016-12-01

    This study examined event-related potential (ERP) correlates of auditory spatial benefits gained from rendering sounds with individualized head-related transfer functions (HRTFs). Noise bursts with identical virtual elevations (0°-90°) were presented back-to-back in 5-10 burst "runs" in a roving oddball paradigm. Detection of a run's start (i.e., elevation change detection) was enhanced when bursts were rendered with an individualized compared to a non-individualized HRTF. ERPs showed increased P3 amplitudes to first bursts of a run in the individualized HRTF condition. Condition differences in P3 amplitudes and behavior were positively correlated. The data suggest that part of the individualization benefit reflects post-sensory processes.

  8. Proteome rearrangements after auditory learning: high-resolution profiling of synapse-enriched protein fractions from mouse brain.

    PubMed

    Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2016-07-01

    Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions, pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta-analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing.

  9. Auditory and Visual Working Memory Functioning in College Students with Attention-Deficit/Hyperactivity Disorder and/or Learning Disabilities.

    PubMed

    Liebel, Spencer W; Nelson, Jason M

    2017-02-07

    We investigated auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprised four groups in the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders formed a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001, d = -0.85. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity disorder or clinical control groups.
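    The paired comparisons reported here (e.g., t(71) = -6.19, d = -0.85) follow the standard paired-samples recipe: a within-subject t test plus Cohen's d computed on the difference scores. A sketch on synthetic standard scores; the means, SDs, sample size, and seed are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 72  # e.g., the size of the learning-disability group above

# Hypothetical standard scores: auditory WM lower than visual WM on average.
visual = rng.normal(100, 15, n)
auditory = visual - rng.normal(8, 10, n)

t, p = stats.ttest_rel(auditory, visual)   # paired-samples t test
diff = auditory - visual
d = diff.mean() / diff.std(ddof=1)         # Cohen's d for paired samples

print(f"t({n - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```

    Reporting d on the paired differences (rather than pooled group SDs) matches the within-subject design used for these contrasts.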

  10. Prenatal complex rhythmic music sound stimulation facilitates postnatal spatial learning but transiently impairs memory in the domestic chick.

    PubMed

    Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S

    2011-01-01

    Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds or no stimulation (control) was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at age 24, 72 and 120 h were tested on a T-maze for spatial learning and the memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, the plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less (p < 0.001) time to navigate the maze over the three trials thereby showing an improvement with training. In both sound-stimulated groups, the total time taken to reach the target decreased significantly (p < 0.01) in comparison to the unstimulated control group, indicating the facilitation of spatial learning. However, this decline was more at 24 h than at later posthatch ages. When tested for memory after 24 h of training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p < 0.001) time to traverse the maze, suggesting a temporary impairment in their retention of the learnt task. In both sound-stimulated groups at 24 h, the plasma corticosterone levels were significantly decreased (p < 0.001) and increased thereafter at 72 h (p < 0.001) and 120 h which may contribute to the differential response in spatial learning. Thus, prenatal auditory stimulation with either species-specific or complex rhythmic music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory. 2011 S. Karger AG, Basel.

  11. Extending and Applying the EPIC Architecture for Human Cognition and Performance: Auditory and Spatial Components

    DTIC Science & Technology

    2016-03-01

    exploring improvements to the stream tracking mechanism. Goal 12. We extended the model to situations in which the speakers are spatially separated...content detection and stream tracking accounts for these effects. As summary measures of goodness of fit, r² = 0.99 between predicted and observed...perceived spatial location. Spatial location effects with two talkers. Most of the available studies have manipulated spatial location only in the

  12. Base-rate data and norms for the Rey Auditory Verbal Learning Embedded Performance Validity Indicator.

    PubMed

    Poreh, Amir; Tolfo, Sarah; Krivenko, Anna; Teaford, Max

    2016-08-25

    The present study examines the Rey Auditory Verbal Learning Test (RAVLT) Embedded Performance Validity Indicator (EPVI) for detecting performance validity. This retrospective study analyzes the performance of four groups comprising 879 participants: 464 clinically referred patients with suspected dementia, 91 forensic patients identified as not exhibiting adequate effort on other measures of response bias, 25 patients with well-documented TBI, and a random sample of 198 adults collected in the Gulf State of Oman. The EPVI was also evaluated against normative data collected from the literature. Sensitivity and specificity analyses indicate moderate to high sensitivity yet low specificity. In conclusion, the study shows that the EPVI is a reasonably good indicator of inadequate effort on the RAVLT, but those who fail this measure are not necessarily exhibiting inadequate effort. The limitations and benefits of utilizing the EPVI in clinical practice are discussed.
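    The sensitivity/specificity analysis behind an embedded validity indicator reduces to a confusion matrix against known group membership at a chosen cutoff. A minimal sketch with hypothetical validity scores and an assumed cutoff (none of the numbers below come from the study).

```python
def sensitivity_specificity(scores, truth, cutoff):
    """Flag a case as 'failed' when its validity score is at or below the
    cutoff, then compare with known membership (True = known poor effort)."""
    fail = [s <= cutoff for s in scores]
    tp = sum(f and t for f, t in zip(fail, truth))        # correctly flagged
    fn = sum((not f) and t for f, t in zip(fail, truth))  # missed poor effort
    tn = sum((not f) and (not t) for f, t in zip(fail, truth))
    fp = sum(f and (not t) for f, t in zip(fail, truth))  # credible but flagged
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical scores: 4 known poor-effort cases, then 4 credible cases.
scores = [3, 4, 5, 5, 5, 7, 8, 9]
truth = [True, True, True, True, False, False, False, False]
sens, spec = sensitivity_specificity(scores, truth, cutoff=5)
print(sens, spec)   # high sensitivity, lower specificity at this cutoff
```

    The false-positive cell (fp) is why low specificity matters clinically: a failed indicator alone cannot establish inadequate effort.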

  13. Learning to match auditory and visual speech cues: social influences on acquisition of phonological categories.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month-olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development.

  14. Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss

    PubMed Central

    Karawani, Hanin; Bitan, Tali; Attias, Joseph; Banai, Karen

    2016-01-01

    Introduction : Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods : Fifty-six listeners (60–72 y/o) participated in the study: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate training, delayed training, and no training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results : Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions : ARHL did not preclude auditory perceptual learning but there was little generalization to

  15. Modeling speech imitation and ecological learning of auditory-motor maps

    PubMed Central

    Canevari, Claudia; Badino, Leonardo; D'Ausilio, Alessandro; Fadiga, Luciano; Metta, Giorgio

    2013-01-01

    Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, the recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The main limitations of current ASR are mainly evident in the realistic use of such systems. These limitations can be partly reduced by using normalization strategies that minimize inter-speaker variability by either explicitly removing speakers' peculiarities or adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compare it with strategies that ignore the motor domain. Specifically, we first trained a regressor through state-of-the-art machine learning techniques to build an auditory-motor mapping, in a sense mimicking a human learner that tries to reproduce utterances produced by other speakers. This auditory-motor mapping maps the speech acoustics of a speaker into the motor plans of a reference speaker. Since, during recognition, only speech acoustics are available, the mapping is necessary to “recover” motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers that was used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy slightly but significantly outperforms all other strategies where only acoustics is taken into account. PMID:23818883
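    The auditory-motor mapping described here amounts to a supervised regressor from acoustic features to a reference speaker's motor parameters, trained once and then used at recognition time to "recover" motor information from acoustics alone. The sketch below substitutes a ridge regressor and synthetic linear data for the paper's actual features and model; it only shows the train/map/recover pipeline, not their implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical data: 13-dim acoustic frames (MFCC-like) from many speakers,
# paired with 6-dim motor/articulatory parameters of one reference speaker.
# A hidden linear map plus noise stands in for real parallel speech data.
W = rng.normal(size=(13, 6))
acoustic = rng.normal(size=(2000, 13))
motor = acoustic @ W + 0.05 * rng.normal(size=(2000, 6))

# "Imitation learning": regress reference-speaker motor plans from acoustics.
mapper = Ridge(alpha=1.0)
mapper.fit(acoustic[:1500], motor[:1500])
r2 = mapper.score(acoustic[1500:], motor[1500:])   # held-out goodness of fit
print(round(r2, 3))

# At recognition time only acoustics are available; the mapper recovers
# motor features, which could then feed a phone classifier for
# motor-based speaker normalization.
recovered = mapper.predict(acoustic[1500:])
```

    The design point is that the regressor's output space belongs to a single reference speaker, so mapping every speaker's acoustics through it normalizes away inter-speaker variability before classification.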

  16. Jumpstarting auditory learning in children with cochlear implants through music experiences.

    PubMed

    Barton, Christine; Robbins, Amy McConkey

    2015-09-01

    Musical experiences are a valuable part of the lives of children with cochlear implants (CIs). In addition to the pleasure, relationships and emotional outlet provided by music, it serves to enhance or 'jumpstart' other auditory and cognitive skills that are critical for development and learning throughout the lifespan. Musicians have been shown to be 'better listeners' than non-musicians with regard to how they perceive and process sound. A heuristic model of music therapy is reviewed, including six modulating factors that may account for the auditory advantages demonstrated by those who participate in music therapy. The integral approach to music therapy is described along with the hybrid approach to pediatric language intervention. These approaches share the characteristics of placing high value on ecologically valid therapy experiences, i.e., engaging in 'real' music and 'real' communication. Music and language intervention techniques used by the authors are presented. It has been documented that children with CIs consistently have lower music perception scores than do their peers with normal hearing (NH). On the one hand, this finding matters a great deal because it provides parameters for setting reasonable expectations and highlights the work still required to improve signal processing with the devices so that they more accurately transmit music to CI listeners. On the other hand, the finding might not matter much if we assume that music, even in its less-than-optimal state, functions for CI children, as for NH children, as a developmental jumpstarter, a language-learning tool, a cognitive enricher, a motivator, and an attention enhancer.

  17. Discrimination of time intervals presented in sequences: spatial effects with multiple auditory sources.

    PubMed

    Grondin, Simon; Plourde, Marilyn

    2007-10-01

    This article discusses two experiments on the discrimination of time intervals presented in sequences marked by brief auditory signals. Participants had to indicate whether the last interval in a series of three intervals marked by four auditory signals was shorter or longer than the previous intervals. Three base durations were under investigation: 75, 150, and 225 ms. In Experiment 1, sounds were presented through headphones, from a single speaker in front of the participants, or by four equally spaced speakers. In all three presentation modes, the highest difference threshold was obtained in the lower base duration condition (75 ms), thus indicating an impairment of temporal processing when sounds are presented too rapidly. The results also indicate the presence, in each presentation mode, of a 'time-shrinking effect' (i.e., with the last interval being perceived as briefer than the preceding ones) at 75 ms, but not at 225 ms. Lastly, using different sound sources to mark time did not significantly impair discrimination. In Experiment 2, three signals were presented from the same source, and the last signal was presented at one of two locations, either close or far. The perceived duration was not influenced by the location of the fourth signal when the participant knew before each trial where the sounds would be delivered. However, when the participant was uncertain as to its location, more space between markers resulted in longer perceived duration, a finding that applies only at 150 and 225 ms. Moreover, the perceived duration was affected by the direction of the sequences (left-right vs. right-left).

  18. Sensory Processing of Backward-Masking Signals in Children with Language-Learning Impairment as Assessed with the Auditory Brainstem Response.

    ERIC Educational Resources Information Center

    Marler, Jeffrey A.; Champlin, Craig A.

    2005-01-01

    The purpose of this study was to examine the possible contribution of sensory mechanisms to an auditory processing deficit shown by some children with language-learning impairment (LLI). Auditory brainstem responses (ABRs) were measured from 2 groups of school-aged (8-10 years) children. One group consisted of 10 children with LLI, and the other…

  19. The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults.

    PubMed

    Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv

    2017-03-01

    In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific, and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice, once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, and who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable, and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The

  20. The Effects of Restricted Peripheral Field-of-View on Spatial Learning while Navigating

    PubMed Central

    2016-01-01

    Recent work with simulated reductions in visual acuity and contrast sensitivity has found decrements in survey spatial learning as well as increased attentional demands when navigating, compared to performance with normal vision. Given these findings, and previous work showing that peripheral field loss has been associated with impaired mobility and spatial memory for room-sized spaces, we investigated the role of peripheral vision during navigation using a large-scale spatial learning paradigm. First, we aimed to establish the magnitude of spatial memory errors at different levels of field restriction. Second, we tested the hypothesis that navigation under these different levels of restriction would use additional attentional resources. Normally sighted participants walked on novel real-world paths wearing goggles that restricted the field-of-view (FOV) to severe (15°, 10°, 4°, or 0°) or mild angles (60°) and then pointed to remembered target locations using a verbal reporting measure. They completed a concurrent auditory reaction time task throughout each path to measure cognitive load. Only the most severe restrictions (4° and blindfolded) showed impairment in pointing error compared to the mild restriction (within-subjects). The 10° and 4° conditions also showed an increase in reaction time on the secondary attention task, suggesting that navigating with these extreme peripheral field restrictions demands the use of limited cognitive resources. This comparison of different levels of field restriction suggests that although peripheral field loss requires the actor to use more attentional resources while navigating starting at a less extreme level (10°), spatial memory is not negatively affected until the restriction is very severe (4°). These results have implications for understanding of the mechanisms underlying spatial learning during navigation and the approaches that may be taken to develop assistance for navigation with visual impairment. PMID

  1. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning.

    PubMed

    de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek

    2017-03-17

    The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.

  2. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training

    PubMed Central

    Phan, Mimi L.; Vicario, David S.

    2014-01-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. PMID:25475353

  3. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    PubMed

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations.
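The segmentation mechanism this abstract describes, splitting a continuous stream wherever the conditional probability of the next element given the current one dips, can be illustrated with a small sketch. The three-syllable "words", the stream, and the 0.75 threshold below are illustrative assumptions, not the authors' stimuli:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probabilities P(B|A) = count(A->B) / count(A)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(stream, tps, threshold=0.75):
    """Insert a boundary wherever the forward transitional probability dips below threshold."""
    chunks, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            chunks.append(current)
            current = []
        current.append(b)
    chunks.append(current)
    return chunks

# Toy stream built from three "words" (bi-da-ku, pa-do-ti, go-la-tu) in varying order:
# within-word transitions are perfectly predictable (TP = 1.0), between-word ones are not.
stream = ("bi da ku pa do ti go la tu pa do ti go la tu "
          "bi da ku go la tu pa do ti bi da ku").split()
tps = transitional_probabilities(stream)
chunks = segment(stream, tps)
print(chunks)  # each recovered chunk is one of the three "words"
```

Under this toy design, every between-word transitional probability is at most 2/3, so thresholding at 0.75 recovers exactly the word boundaries.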

  4. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training.

    PubMed

    Bell, Brittany A; Phan, Mimi L; Vicario, David S

    2015-03-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions.

  5. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applied to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents through symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.

  6. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applied to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents through symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538
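The abstracts above specify only that the BCI output reaches the robot or virtual environment as UDP datagrams. Everything else in this sketch (the command vocabulary, the plain-ASCII message format, the loopback demonstration) is an illustrative assumption about what such a protocol could look like:

```python
import socket

COMMANDS = {"left", "right", "forward", "stop"}  # hypothetical decoded BCI intentions

def make_receiver(host="127.0.0.1"):
    """Bind a UDP socket (the robot/VR-agent side) on an ephemeral port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)  # avoid blocking forever if a datagram is lost
    sock.bind((host, 0))
    return sock, sock.getsockname()[1]

def send_command(command, host, port):
    """Send one decoded intention as a fire-and-forget UDP datagram (the BCI side)."""
    if command not in COMMANDS:
        raise ValueError(f"unknown command: {command!r}")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command.encode("ascii"), (host, port))

def receive_command(sock, bufsize=64):
    """Wait for the next datagram and return the decoded command string."""
    data, _addr = sock.recvfrom(bufsize)
    return data.decode("ascii")

receiver, port = make_receiver()
send_command("forward", "127.0.0.1", port)
print(receive_command(receiver))
receiver.close()
```

UDP's connectionless, low-latency delivery suits this kind of real-time control loop, at the cost of no delivery guarantee; a production system would add sequence numbers or a keep-alive.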

  7. A machine learning approach for distinguishing age of infants using auditory evoked potentials.

    PubMed

    Ravan, Maryam; Reilly, James P; Trainor, Laurel J; Khodayari-Rostamabad, Ahmad

    2011-11-01

    To develop a high-performance machine learning (ML) approach for predicting the age, and consequently the state of brain development, of infants based on their event-related potentials (ERPs) in response to an auditory stimulus. The ERP responses of twenty-nine 6-month-olds, nineteen 12-month-olds, and ten adults to an auditory stimulus were derived from electroencephalogram (EEG) recordings. The most relevant wavelet coefficients corresponding to the first- and second-order moment sequences of the ERP signals were then identified using a feature selection scheme that made no a priori assumptions about the features of interest. These features were then fed into a classifier for determination of age group. We verified that ERP data could yield features that discriminate the age group of individual subjects with high reliability. A low-dimensional representation of the selected feature vectors showed significant clustering behavior corresponding to subject age group. The performance of the proposed age group prediction scheme was evaluated using the leave-one-out cross-validation method and found to exceed 90% accuracy. This study indicates that ERP responses to an acoustic stimulus can be used to predict the age and consequently the state of brain development of infants. This study is of fundamental scientific significance in demonstrating that a machine classification algorithm with no a priori assumptions can classify ERP responses according to age and, with further work, potentially provide useful clues in the understanding of the development of the human brain. A potential clinical use for the proposed methodology is the identification of developmental delay: an abnormal condition may be suspected if the age estimated by the proposed technique is significantly less than the chronological age of the subject. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
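The leave-one-out evaluation reported in this abstract is simple to sketch: each subject is held out once, the classifier is trained on everyone else, and accuracy is the fraction of held-out subjects classified correctly. The study used wavelet-derived ERP features and its own classifier; the synthetic 2-D features and nearest-centroid classifier below are stand-ins chosen only to show the cross-validation loop itself:

```python
import math

def nearest_centroid_fit(X, y):
    """Compute one mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def nearest_centroid_predict(centroids, features):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

def leave_one_out_accuracy(X, y):
    """Hold out each subject once, train on the rest, and score the held-out prediction."""
    correct = 0
    for i in range(len(X)):
        centroids = nearest_centroid_fit(X[:i] + X[i+1:], y[:i] + y[i+1:])
        correct += nearest_centroid_predict(centroids, X[i]) == y[i]
    return correct / len(X)

# Entirely synthetic 2-D "ERP features" for three age groups.
X = [[1.0, 0.2], [1.1, 0.1], [0.9, 0.3],   # 6-month-olds
     [2.0, 1.0], [2.1, 1.2], [1.9, 0.9],   # 12-month-olds
     [4.0, 3.0], [4.2, 3.1], [3.8, 2.9]]   # adults
y = ["6mo"] * 3 + ["12mo"] * 3 + ["adult"] * 3
print(leave_one_out_accuracy(X, y))  # → 1.0 on these well-separated toy clusters
```

Leave-one-out is attractive for small clinical samples like this one because every subject serves as a test case while nearly all data remain available for training.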

  8. Perceptual learning evidence for tuning to spectro-temporal modulation in the human auditory system

    PubMed Central

    Sabin, Andrew T.; Eddins, David A.; Wright, Beverly A.

    2012-01-01

    Natural sounds are characterized by complex patterns of sound intensity distributed across both frequency (spectral modulation) and time (temporal modulation). Perception of these patterns has been proposed to depend upon a bank of modulation filters, each tuned to a unique combination of a spectral and a temporal modulation frequency. There is considerable physiological evidence for such combined spectro-temporal tuning. However direct behavioral evidence is lacking. Here we examined the processing of spectro-temporal modulation behaviorally using a perceptual-learning paradigm. We trained human listeners ~1 hr/day for 7 days to discriminate the depth of either spectral (0.5 cyc/oct; 0 Hz), temporal (0 cyc/oct; 32 Hz), or upward spectro-temporal (0.5 cyc/oct; 32 Hz) modulation. Each trained group learned more on their respective trained condition than controls who received no training. Critically, this depth-discrimination learning did not generalize to the trained stimuli of the other groups or to downward spectro-temporal (0.5 cyc/oct; −32 Hz) modulation. Learning on discrimination also led to worsening on modulation detection, but only when the same spectro-temporal modulation was used for both tasks. Thus, these influences of training were specific to the trained combination of spectral and temporal modulation frequencies, even when the trained and untrained stimuli had one modulation frequency in common. This specificity indicates that training modified circuitry that had combined spectro-temporal tuning, and therefore that circuits with such tuning can influence perception. These results are consistent with the possibility that the auditory system analyzes sounds through filters tuned to combined spectro-temporal modulation. PMID:22573676
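The abstract does not spell out how the stimuli were built, but spectro-temporal modulation of this kind is commonly realized as a moving-ripple envelope, a sinusoidal modulation A(t, x) = 1 + m·sin(2π(w·t + Ω·x) + φ) applied across time t and spectral position x (in octaves), where w is the temporal rate (Hz), Ω the spectral density (cyc/oct), and m the depth being discriminated. The sketch below encodes that envelope under those assumptions; the sign convention for upward vs. downward motion varies across studies:

```python
import math

def ripple_envelope(t, x, depth, rate_hz, density_cyc_per_oct, phase=0.0):
    """Amplitude envelope of a moving spectro-temporal ripple at time t (s)
    and spectral position x (octaves above the lowest component).
    Reversing the relative sign of rate_hz and density flips the
    direction of motion (upward vs. downward)."""
    return 1.0 + depth * math.sin(
        2 * math.pi * (rate_hz * t + density_cyc_per_oct * x) + phase)

# The trained conditions named in the abstract, at 50% modulation depth:
spectral_only   = lambda t, x: ripple_envelope(t, x, 0.5, 0.0, 0.5)    # 0.5 cyc/oct, 0 Hz
temporal_only   = lambda t, x: ripple_envelope(t, x, 0.5, 32.0, 0.0)   # 0 cyc/oct, 32 Hz
spectrotemporal = lambda t, x: ripple_envelope(t, x, 0.5, 32.0, 0.5)   # 0.5 cyc/oct, 32 Hz
```

In an actual stimulus this envelope would multiply a dense complex of log-spaced tones; the point of the sketch is that spectral-only, temporal-only, and combined conditions differ only in which of the two modulation frequencies is nonzero.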

  9. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children

    PubMed Central

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C.; Hornickel, Jane; Strait, Dana L.; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. 
Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  10. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    PubMed

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. 
Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  11. The role of expectation and probabilistic learning in auditory boundary perception: a model comparison.

    PubMed

    Pearce, Marcus T; Müllensiefen, Daniel; Wiggins, Geraint A

    2010-01-01

    Grouping and boundary perception are central to many aspects of sensory processing in cognition. We present a comparative study of recently published computational models of boundary perception in music. In doing so, we make three contributions. First, we hypothesise a relationship between expectation and grouping in auditory perception, and introduce a novel information-theoretic model of perceptual segmentation to test the hypothesis. Although we apply the model to musical melody, it is applicable in principle to sequential grouping in other areas of cognition. Second, we address a methodological consideration in the analysis of ambiguous stimuli that produce different percepts between individuals. We propose and demonstrate a solution to this problem, based on clustering of participants prior to analysis. Third, we conduct the first comparative analysis of probabilistic-learning and rule-based models of perceptual grouping in music. In spite of having only unsupervised exposure to music, the model performs comparably to rule-based models based on expert musical knowledge, supporting a role for probabilistic learning in perceptual segmentation of music.
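An information-theoretic segmentation model of the kind described can be sketched in three steps: estimate conditional probabilities from exposure, convert each event to its information content (surprisal), and place boundaries where surprisal peaks. The bigram model, add-one smoothing, peak-picking rule, and toy pitch-letter melody below are illustrative simplifications; the authors' model is considerably more sophisticated:

```python
import math
from collections import Counter

def bigram_surprisal(sequence, alphabet):
    """Information content h_i = -log2 P(x_i | x_{i-1}) under an
    add-one-smoothed bigram model estimated from the sequence itself."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    ctx_counts = Counter(sequence[:-1])
    V = len(alphabet)
    return [-math.log2((pair_counts[(a, b)] + 1) / (ctx_counts[a] + V))
            for a, b in zip(sequence, sequence[1:])]

def boundary_indices(surprisal):
    """Report a boundary before each element whose surprisal is a local peak."""
    return [i + 1 for i in range(1, len(surprisal) - 1)
            if surprisal[i] > surprisal[i - 1] and surprisal[i] >= surprisal[i + 1]]

# Toy "melody" of pitch letters: a repeated cde motif, a fg-fg excursion, a return.
melody = list("cdecdecdefgfgcdec")
h = bigram_surprisal(melody, set(melody))
print(boundary_indices(h))  # → [3, 6, 9, 11, 13]
```

On this toy melody the peaks fall at the motif edges, segmenting it as cde | cde | cde | fg | fg | cdec: unexpected continuations (high surprisal) are heard as group boundaries, which is the expectation-grouping relationship the abstract hypothesises.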

  12. Service learning in auditory rehabilitation courses: the University of Texas at Dallas.

    PubMed

    Cokely, Carol G; Thibodeau, Linda M

    2011-12-01

    The aim of this work was to review service learning (SL) principles and their implementation in the auditory rehabilitation (AR) curriculum at the University of Texas at Dallas, and to evaluate the courses to determine whether the potential benefits of SL are worth the substantial time commitment and course restructuring. Via retrospective review, student outcomes for 25 students from 3 cohorts who completed the adult AR course prior to implementation of the SL curriculum (pre-SL) were compared with those of 28 students from 3 SL cohorts. Data included final examination grades; ratings of overall course content, amount learned, clarity of responsibility, workload, and relevance; and course comments. Student journals from the SL group and mentor surveys also were reviewed. The majority of student outcomes were comparable for pre-SL and SL cohorts. Clarity of responsibility and workload were rated lower for SL courses than for pre-SL classes, with medium and small-to-medium effect sizes, respectively. Mentors rated the projects and process as being of high value and benefit, and several projects remain in use beyond the end of the course. Continued use of an SL approach is supported, but additional guidance for students is needed for reflection and project analysis.

  13. Repeated acquisition and performance chamber for mice: a paradigm for assessment of spatial learning and memory.

    PubMed

    Brooks, A I; Cory-Slechta, D A; Murg, S L; Federoff, H J

    2000-11-01

    Molecular genetic manipulation of the mouse offers the possibility of elucidating the function of individual gene products in neural systems underlying learning and memory. Many extant learning paradigms for mice rely on negative reinforcement, involve simple problems that are relatively rapidly acquired and thus preclude time-course assessment, and may impose the need to undertake additional experiments to determine the extent to which noncognitive behaviors influence the measures of learning. To overcome such limitations, a multiple schedule of repeated acquisition and performance was behaviorally engineered to assess learning vs. rote performance within a behavioral test session and within subject, using an apparatus modified from one designed for the rat (the repeated acquisition and performance chamber; RAPC). The multiple schedule required mice to learn a new sequence of door openings leading to saccharin availability in the learning component during each session, while the sequence of door openings for the performance component remained constant across sessions. The learning and performance components alternated over the course of each test session, with different auditory stimuli signaling which component was currently in effect. To validate this paradigm, learning vs. performance was evaluated in two inbred strains of mice: C57BL/6J and 129/SvJ. The hippocampal dependence of this measure was examined in lesioned C57BL/6J mice. Both strains exhibited longer latencies and higher errors in the learning compared to the performance component and evidenced declines in both measures across the trials of each session, consistent with an acquisition phenomenon. These same measures showed little or no evidence of change in the performance component. Whereas three trials per session were utilized with C57BL/6J mice in each component, behavior of 129/SvJ mice could only be sustained for two trials per component per session, demonstrating differences in testing capabilities between

  14. The role of spatial abilities and age in performance in an auditory computer navigation task.

    PubMed

    Pak, Richard; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    Age-related differences in spatial ability have been suggested as a mediator of age-related differences in computer-based task performance. However, the vast majority of tasks studied have primarily used a visual display (e.g., graphical user interfaces). In the current study, the relationship between spatial ability and performance in a non-visual computer-based navigation task was examined in a sample of 196 participants ranging in age from 18 to 91. Participants called into a simulated interactive voice response system and carried out a variety of transactions. They also completed measures of attention, working memory, and spatial abilities. The results showed that age-related differences in spatial ability predicted a significant amount of variance in performance in the non-visual computer task, even after controlling for other abilities. Understanding the abilities that influence performance with technology may provide insight into the source of age-related performance differences in the successful use of technology.

  15. The role of spatial abilities and age in performance in an auditory computer navigation task

    PubMed Central

    Pak, Richard; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.

    2008-01-01

    Age-related differences in spatial ability have been suggested as a mediator of age-related differences in computer-based task performance. However, the vast majority of tasks studied have primarily used a visual display (e.g., graphical user interfaces). In the current study, the relationship between spatial ability and performance in a non-visual computer-based navigation task was examined in a sample of 196 participants ranging in age from 18 to 91. Participants called into a simulated interactive voice response system and carried out a variety of transactions. They also completed measures of attention, working memory, and spatial abilities. The results showed that age-related differences in spatial ability predicted a significant amount of variance in performance in the non-visual computer task, even after controlling for other abilities. Understanding the abilities that influence performance with technology may provide insight into the source of age-related performance differences in the successful use of technology. PMID:18997876

  16. Learning To Make Music Enhances Spatial Reasoning.

    ERIC Educational Resources Information Center

    Hetland, Lois

    2000-01-01

    Examines whether active instruction in music enhances preschool and elementary school student performance on spatial tasks. Reports that music enhances the spatial-temporal performance of children during and up to two years following the instruction and that the effect is moderate and consistent. Includes references. (CMK)

  17. Learning To Make Music Enhances Spatial Reasoning.

    ERIC Educational Resources Information Center

    Hetland, Lois

    2000-01-01

    Examines whether active instruction in music enhances preschool and elementary school student performance on spatial tasks. Reports that music enhances the spatial-temporal performance of children during and up to two years following the instruction and that the effect is moderate and consistent. Includes references. (CMK)

  18. Spatial Ability Learning through Educational Robotics

    ERIC Educational Resources Information Center

    Julià, Carme; Antolí, Juan Òscar

    2016-01-01

    Several authors insist on the importance of students' acquisition of spatial abilities and visualization in order to have academic success in areas such as science, technology or engineering. This paper proposes to discuss and analyse the use of educational robotics to develop spatial abilities in 12 year old students. First of all, a course to…

  19. Spatial Ability Learning through Educational Robotics

    ERIC Educational Resources Information Center

    Julià, Carme; Antolí, Juan Òscar

    2016-01-01

    Several authors insist on the importance of students' acquisition of spatial abilities and visualization in order to have academic success in areas such as science, technology or engineering. This paper proposes to discuss and analyse the use of educational robotics to develop spatial abilities in 12 year old students. First of all, a course to…

  20. Rectangular Array Model Supporting Students' Spatial Structuring in Learning Multiplication

    ERIC Educational Resources Information Center

    Shanty, Nenden Octavarulia; Wijaya, Surya

    2012-01-01

    We examine how rectangular array model can support students' spatial structuring in learning multiplication. To begin, we define what we mean by spatial structuring as the mental operation of constructing an organization or form for an object or set of objects. For that reason, the eggs problem was chosen as the starting point in which the…

  1. Costs of switching auditory spatial attention in following conversational turn-taking

    PubMed Central

    Lin, Gaven; Carlile, Simon

    2015-01-01

    Following a multi-talker conversation relies on the ability to rapidly and efficiently shift the focus of spatial attention from one talker to another. The current study investigated the listening costs associated with shifts in spatial attention during conversational turn-taking in 16 normally-hearing listeners using a novel sentence recall task. Three pairs of syntactically fixed but semantically unpredictable matrix sentences, recorded from a single male talker, were presented concurrently through an array of three loudspeakers (directly ahead and +/−30° azimuth). Subjects attended to one spatial location, cued by a tone, and followed the target conversation from one sentence to the next using the call-sign at the beginning of each sentence. Subjects were required to report the last three words of each sentence (speech recall task) or answer multiple choice questions related to the target material (speech comprehension task). The reading span test, attention network test, and trail making test were also administered to assess working memory, attentional control, and executive function. There was a 10.7 ± 1.3% decrease in word recall, a pronounced primacy effect, and a rise in masker confusion errors and word omissions when the target switched location between sentences. Switching costs were independent of the location, direction, and angular size of the spatial shift but did appear to be load dependent and only significant for complex questions requiring multiple cognitive operations. Reading span scores were positively correlated with total words recalled, and negatively correlated with switching costs and word omissions. Task switching speed (Trail-B time) was also significantly correlated with recall accuracy. Overall, this study highlights (i) the listening costs associated with shifts in spatial attention and (ii) the important role of working memory in maintaining goal relevant information and extracting meaning from dynamic multi-talker conversations

  2. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    PubMed

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  4. Exploring Visuospatial Thinking in Learning about Mineralogy: Spatial Orientation Ability and Spatial Visualization Ability

    ERIC Educational Resources Information Center

    Ozdemir, Gokhan

    2010-01-01

    This mixed-method research attempted to clarify the role of visuospatial abilities in learning about mineralogy. Various sources of data--including quantitative pre- and postmeasures of spatial visualization and spatial orientation tests and achievement scores on six measures and qualitative unstructured observations, interviews, and field trip…

  5. Contributions of Spatial Working Memory to Visuomotor Learning

    ERIC Educational Resources Information Center

    Anguera, Joaquin A.; Reuter-Lorenz, Patricia A.; Willingham, Daniel T.; Seidler, Rachael D.

    2010-01-01

    Previous studies of motor learning have described the importance of cognitive processes during the early stages of learning; however, the precise nature of these processes and their neural correlates remains unclear. The present study investigated whether spatial working memory (SWM) contributes to visuomotor adaptation depending on the stage of…

  7. The role of diffusive architectural surfaces on auditory spatial discrimination in performance venues.

    PubMed

    Robinson, Philip W; Pätynen, Jukka; Lokki, Tapio; Jang, Hyung Suk; Jeon, Jin Yong; Xiang, Ning

    2013-06-01

    In musical or theatrical performance, some venues allow listeners to individually localize and segregate individual performers, while others produce a well-blended ensemble sound. The room acoustic conditions that make this possible, and the psychoacoustic effects at work, are not fully understood. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. Signals were generated from measurements taken in a small theater, and listeners in the audience area were asked to distinguish pairs of speech sources on stage with various spatial separations. This experiment was repeated with the proscenium splay walls treated to be flat, diffusive, or absorptive. Similar experiments were conducted in a simulated hall, utilizing 11 early reflections with various characteristics, and measured late reverberation. The experiments reveal that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat or absorptive rather than diffusive surfaces.

  8. On the Auditory-Proprioception Substitution Hypothesis: Movement Sonification in Two Deafferented Subjects Learning to Write New Characters

    PubMed Central

    Danna, Jérémy; Velay, Jean-Luc

    2017-01-01

    The aim of this study was to evaluate the compensatory effects of real-time auditory feedback on two proprioceptively deafferented subjects. The real-time auditory feedback was based on a movement sonification approach, consisting of translating some movement variables into synthetic sounds to make them audible. The two deafferented subjects and 16 age-matched control participants were asked to learn four new characters. The characters were learned under two different conditions, one without sonification and one with sonification, respecting a within-subject protocol. The results revealed that characters learned with sonification were reproduced more quickly and more fluently than characters learned without, and that the effects of sonification were larger in deafferented than in control subjects. Secondly, whereas control subjects were able to learn the characters without sounds, the deafferented subjects were able to learn them only when they were trained with sonification. Thirdly, although the improvement was still present in controls, the performance of deafferented subjects came back to the pre-test level 2 h after the training with sounds. Finally, the two deafferented subjects performed differently from each other, highlighting the importance of studying at least two subjects to better understand the loss of proprioception and its impact on motor control and learning. To conclude, movement sonification may compensate for a lack of proprioception, supporting the auditory-proprioception substitution hypothesis. However, sonification would act as a “sensory prosthesis” helping deafferented subjects to better feel their movements, without permanently modifying their motor performance once the prosthesis is removed. Potential clinical applications for motor rehabilitation are numerous: people with a limb prosthesis, a stroke, or a peripheral nerve injury may potentially benefit. PMID:28386211

  10. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease.

    PubMed

    Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi

    2017-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores reflects the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable from structural magnetic resonance imaging (MRI) data using machine learning approaches, as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided a highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) scores in a dataset consisting of 806 AD, mild cognitive impairment (MCI), or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and the amygdala for the estimation of RAVLT Immediate, and the angular gyrus, hippocampus, and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
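
    The modeling approach in this record can be sketched in a few lines. The example below fits an elastic net regression to synthetic data standing in for gray matter densities and a memory score, then reports a cross-validated correlation; the sample sizes, hyperparameter grid, and library choice (scikit-learn) are assumptions for illustration, not details of the study.

```python
# Hedged sketch of elastic net score prediction on synthetic data.
# Sizes, hyperparameter grids, and the toy signal are assumptions.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 500                   # far smaller than real MRI data
X = rng.standard_normal((n_subjects, n_features))   # stand-in voxel densities
w = np.zeros(n_features)
w[:20] = rng.standard_normal(20)                    # only a few regions carry signal
y = X @ w + 0.5 * rng.standard_normal(n_subjects)   # stand-in memory score

# Elastic net mixes an L1 penalty (sparsity: selects predictive regions)
# with an L2 penalty (stability among correlated features); the mixing
# ratio and penalty strength are tuned by internal cross-validation.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], n_alphas=20, cv=5,
                     random_state=0)
y_hat = cross_val_predict(model, X, y, cv=5)        # out-of-fold predictions

r = np.corrcoef(y, y_hat)[0, 1]                     # cross-validated correlation
print(f"cross-validated R = {r:.2f}")
```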

  11. Selective importance of the rat anterior thalamic nuclei for configural learning involving distal spatial cues.

    PubMed

    Dumont, Julie R; Amin, Eman; Aggleton, John P

    2014-01-01

    To test potential parallels between hippocampal and anterior thalamic function, rats with anterior thalamic lesions were trained on a series of biconditional learning tasks. The anterior thalamic lesions did not disrupt the learning of two biconditional associations in operant chambers where a specific auditory stimulus (tone or click) had a differential outcome depending on whether it was paired with a particular visual context (spot or checkered wallpaper) or a particular thermal context (warm or cool). Likewise, rats with anterior thalamic lesions successfully learnt a biconditional task when they were reinforced for digging in one of two distinct cups (containing either beads or shredded paper), depending on the particular appearance of the local context on which the cup was placed (one of two textured floors). In contrast, the same rats were severely impaired at learning the biconditional rule to select a specific cup when in a particular location within the test room. Place learning was then tested with a series of go/no-go discriminations. Rats with anterior thalamic nuclei lesions could learn to discriminate between two locations when they were approached from a constant direction. They could not, however, use this acquired location information to solve a subsequent spatial biconditional task where those same places dictated the correct choice of digging cup. Anterior thalamic lesions produced a selective, but severe, biconditional learning deficit when the task incorporated distal spatial cues. This deficit mirrors that seen in rats with hippocampal lesions, so extending potential interdependencies between the two sites. © 2013 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
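
    The biconditional structure at issue can be made concrete with a toy contingency table: neither the sound nor the context predicts the outcome on its own; only their combination does. The labels below are loosely modeled on the description above and are illustrative only.

```python
# Toy biconditional contingency (illustrative labels): the outcome of a
# sound depends on the context it is paired with, so no single element
# is predictive by itself.
RULE = {
    ("tone", "spot"):       "reward",
    ("tone", "checkered"):  "no reward",
    ("click", "spot"):      "no reward",
    ("click", "checkered"): "reward",
}

# Each individual cue is paired equally often with both outcomes:
for sound in ("tone", "click"):
    outcomes = {RULE[(sound, ctx)] for ctx in ("spot", "checkered")}
    assert outcomes == {"reward", "no reward"}  # sound alone is ambiguous
for ctx in ("spot", "checkered"):
    outcomes = {RULE[(sound, ctx)] for sound in ("tone", "click")}
    assert outcomes == {"reward", "no reward"}  # context alone is ambiguous
```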

  12. Effects of musicality and motivational orientation on auditory category learning: a test of a regulatory-fit hypothesis.

    PubMed

    McAuley, J Devin; Henry, Molly J; Wedd, Alan; Pleskac, Timothy J; Cesario, Joseph

    2012-02-01

    Two experiments investigated the effects of musicality and motivational orientation on auditory category learning. In both experiments, participants learned to classify tone stimuli that varied in frequency and duration according to an initially unknown disjunctive rule; feedback involved gaining points for correct responses (a gains reward structure) or losing points for incorrect responses (a losses reward structure). For Experiment 1, participants were told at the start that musicians typically outperform nonmusicians on the task, and then they were asked to identify themselves as either a "musician" or a "nonmusician." For Experiment 2, participants were given either a promotion focus prime (a performance-based opportunity to gain entry into a raffle) or a prevention focus prime (a performance-based criterion that needed to be maintained to avoid losing an entry into a raffle) at the start of the experiment. Consistent with a regulatory-fit hypothesis, self-identified musicians and promotion-primed participants given a gains reward structure made more correct tone classifications and were more likely to discover the optimal disjunctive rule than were musicians and promotion-primed participants experiencing losses. Reward structure (gains vs. losses) had inconsistent effects on the performance of nonmusicians, and a weaker regulatory-fit effect was found for the prevention focus prime. Overall, the findings from this study demonstrate a regulatory-fit effect in the domain of auditory category learning and show that motivational orientation may contribute to musician performance advantages in auditory perception.

  13. Spatial parameters at the basis of social transfer of learning.

    PubMed

    Lugli, Luisa; Iani, Cristina; Milanese, Nadia; Sebanz, Natalie; Rubichi, Sandro

    2015-06-01

    Recent research indicates that practicing on a joint spatial compatibility task with an incompatible stimulus-response mapping affects subsequent joint Simon task performance, eliminating the social Simon effect. It has been well established that in individual contexts, for transfer of learning to occur, participants need to practice an incompatible association between stimulus and response positions. The mechanisms underlying transfer of learning in joint task performance are, however, less well understood. The present study was aimed at assessing the relative contribution of 3 different spatial relations characterizing the joint practice context: stimulus-response, stimulus-participant, and participant-response relations. In 3 experiments, the authors manipulated the stimulus-response, stimulus-participant, and response-participant associations. We found that learning from the practice task did not transfer to the subsequent task when, during practice, stimulus-response associations were spatially incompatible and stimulus-participant associations were compatible (Experiment 1). However, a transfer of learning was evident when stimulus-participant associations were spatially incompatible. This occurred both when response-participant associations were incompatible (Experiment 2) and when they were compatible (Experiment 3). These results seem to support an agent corepresentation account of correspondence effects emerging in joint settings, since they suggest that, in social contexts, what is critical for obtaining transfer-of-learning effects is the spatial relation between stimulus and participant positions, while the spatial relation between stimulus and response positions is irrelevant.

  14. Auditory map plasticity: Diversity in causes and consequences

    PubMed Central

    Schreiner, Christoph E.; Polley, Daniel B.

    2014-01-01

    Auditory cortical maps have been a long-standing focus of studies that assess the expression, mechanisms, and consequences of sensory plasticity. Here we discuss recent progress in understanding how auditory experience transforms spatially organized sound representations at higher levels of the central auditory pathways. New insights into the mechanisms underlying map changes have been achieved, and more refined interpretations of various map plasticity effects and their consequences, in terms of behavioral corollaries and learning as well as other cognitive aspects, have been offered. The systematic organizational principles of cortical sound processing remain a key aspect in studying and interpreting the role of plasticity in hearing. PMID:24492090

  15. Altering spatial priority maps via reward-based learning.

    PubMed

    Chelazzi, Leonardo; Eštočinová, Jana; Calletti, Riccardo; Lo Gerfo, Emanuele; Sani, Ilaria; Della Libera, Chiara; Santandrea, Elisa

    2014-06-18

    Spatial priority maps are real-time representations of the behavioral salience of locations in the visual field, resulting from the combined influence of stimulus driven activity and top-down signals related to the current goals of the individual. They arbitrate which of a number of (potential) targets in the visual scene will win the competition for attentional resources. As a result, deployment of visual attention to a specific spatial location is determined by the current peak of activation (corresponding to the highest behavioral salience) across the map. Here we report a behavioral study performed on healthy human volunteers, where we demonstrate that spatial priority maps can be shaped via reward-based learning, reflecting long-lasting alterations (biases) in the behavioral salience of specific spatial locations. These biases exert an especially strong influence on performance under conditions where multiple potential targets compete for selection, conferring competitive advantage to targets presented in spatial locations associated with greater reward during learning relative to targets presented in locations associated with lesser reward. Such acquired biases of spatial attention are persistent, are nonstrategic in nature, and generalize across stimuli and task contexts. These results suggest that reward-based attentional learning can induce plastic changes in spatial priority maps, endowing these representations with the "intelligent" capacity to learn from experience.
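
    As a toy illustration (not the study's method), a spatial priority map can be modeled as one salience weight per location that tracks the running reward rate at that location, so that locations paired with greater reward come to dominate the competition for selection. All parameters and values below are invented.

```python
# Toy model: a spatial priority map as per-location salience weights,
# each nudged toward its location's observed reward rate. Learning rate,
# trial count, and reward probabilities are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_locations = 8
priority = np.full(n_locations, 0.5)   # initial salience, arbitrary
reward_prob = np.full(n_locations, 0.2)
reward_prob[3] = 0.8                   # location 3 pays off most often
alpha = 0.1                            # learning rate (assumption)

for _ in range(2000):
    # soft competition: pick a location in proportion to its salience
    p = np.exp(priority) / np.exp(priority).sum()
    loc = rng.choice(n_locations, p=p)
    reward = float(rng.random() < reward_prob[loc])
    priority[loc] += alpha * (reward - priority[loc])  # track reward rate

print(priority.argmax())  # selection is now biased toward location 3
```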

  16. Extending and Applying the EPIC Architecture for Human Cognition and Performance: Auditory and Spatial Components

    DTIC Science & Technology

    2013-03-20

    differences are recoded into semitone differences. After considering a statistical approach popular in machine learning, we decided to first...speakers apart within gender. Development of a "smarter" stream tracking algorithm is part of the proposed work. ...less synchronized than the color words, both of which would make the digits easier to recognize than the colors. In addition, note that digit...

  17. Influence of auditory spatial attention on cross-modal semantic priming effect: evidence from N400 effect.

    PubMed

    Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin

    2017-01-01

    Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting visual pictures as the prime stimuli and semantically related or unrelated sounds as the target stimuli. Event-related potential results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even when the target stimulus is not attended. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly smaller than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.

  18. Modified Navigation Instructions for Spatial Navigation Assistance Systems Lead to Incidental Spatial Learning.

    PubMed

    Gramann, Klaus; Hoepner, Paul; Karrer-Gauss, Katja

    2017-01-01

    Spatial cognitive skills deteriorate with the increasing use of automated GPS navigation, and a general decrease in the ability to orient in space might have further impacts on independence, autonomy, and quality of life. In the present study, we investigated whether modified navigation instructions support incidental spatial knowledge acquisition. A virtual driving environment was used to examine the impact of modified navigation instructions on spatial learning while using a GPS navigation assistance system. Participants navigated through a simulated urban and suburban environment, using navigation support to reach their destination. Driving performance as well as spatial learning was thereby assessed. Three navigation instruction conditions were tested: (i) a control group that was provided with classical navigation instructions at decision points, and two other groups that received navigation instructions at decision points including either (ii) additional irrelevant information about landmarks or (iii) additional personally relevant information (i.e., individual preferences regarding food, hobbies, etc.) associated with landmarks. Driving performance revealed no differences between navigation instructions. Significant improvements were observed in both modified navigation instruction conditions on three different measures of spatial learning and memory: subsequent navigation of the initial route without navigation assistance, landmark recognition, and sketch map drawing. Future navigation assistance systems could incorporate modified instructions to promote incidental spatial learning and to foster more general spatial cognitive abilities. Such systems might extend mobility across the lifespan.

  20. The Role of Right Inferior Parietal Cortex in Auditory Spatial Attention: A Repetitive Transcranial Magnetic Stimulation Study

    PubMed Central

    Karhson, Debra S.; Mock, Jeffrey R.; Golob, Edward J.

    2015-01-01

    Behavioral studies support the concept of an auditory spatial attention gradient by demonstrating that attentional benefits progressively diminish as distance increases from an attended location. Damage to the right inferior parietal cortex can induce a rightward attention bias, which implicates this region in the construction of attention gradients. This study used event-related potentials (ERPs) to define attention-related gradients before and after repetitive transcranial magnetic stimulation (rTMS) to the right inferior parietal cortex. Subjects (n = 16) listened to noise bursts at five azimuth locations (left to right: -90°, -45°, 0° midline, +45°, +90°) and responded to stimuli at one target location (-90°, +90°, separate blocks). ERPs as a function of non-target location were examined before (baseline) and after 0.9 Hz rTMS. Results showed that ERP attention gradients were observed in three time windows (frontal 230–340, parietal 400–460, frontal 550–750 ms). Significant transient rTMS effects were seen in the first and third windows. The first window had a voltage decrease at the farthest location when attending to either the left or right side. The third window had an overall increase in positivity, but only when attending to the left side. These findings suggest that rTMS induced a small contraction in spatial attention gradients within the first time window. The asymmetric effect of attended location on gradients in the third time window may relate to neglect of the left hemispace after right parietal injury. Together, these results highlight the role of the right inferior parietal cortex in modulating frontal lobe attention network activity. PMID:26636333

  1. Impaired spatial and non-spatial configural learning in patients with hippocampal pathology

    PubMed Central

    Kumaran, Dharshan; Hassabis, Demis; Spiers, Hugo J.; Vann, Seralynne D.; Vargha-Khadem, Faraneh; Maguire, Eleanor A.

    2007-01-01

    The hippocampus has been proposed to play a critical role in memory through its unique ability to bind together the disparate elements of an experience. This hypothesis has been widely examined in rodents using a class of tasks known as “configural” or “non-linear”, where outcomes are determined by specific combinations of elements, rather than any single element alone. On the basis of equivocal evidence that hippocampal lesions impair performance on non-spatial configural tasks, it has been proposed that the hippocampus may only be critical for spatial configural learning. Surprisingly few studies in humans have examined the role of the hippocampus in solving configural problems. In particular, no previous study has directly assessed the human hippocampal contribution to non-spatial and spatial configural learning, the focus of the current study. Our results show that patients with primary damage to the hippocampus bilaterally were similarly impaired at configural learning within both spatial and non-spatial domains. Our data also provide evidence that residual configural learning can occur in the presence of significant hippocampal dysfunction. Moreover, evidence obtained from a post-experimental debriefing session suggested that patients acquired declarative knowledge of the underlying task contingencies that corresponded to the best-fit strategy identified by our strategy analysis. In summary, our findings support the notion that the hippocampus plays an important role in both spatial and non-spatial configural learning, and provide insights into the role of the medial temporal lobe (MTL) more generally in incremental reinforcement-driven learning. PMID:17507060

  2. Divided multimodal attention sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  3. Culturally inconsistent spatial structure reduces learning.

    PubMed

    McCrink, Koleen; Shaki, Samuel

    2016-09-01

    Human adults tend to use a spatial continuum to organize any information they consider to be well-ordered, with a sense of initial and final position. The directionality of this spatial mapping is mediated by the culture of the subject, largely as a function of the prevailing reading and writing habits (for example, from left-to-right for English speakers or right-to-left for Hebrew speakers). In the current study, we tasked American and Israeli subjects with encoding and recalling a set of arbitrary pairings, consisting of frequently ordered stimuli (letters with shapes: Experiment 1) or infrequently ordered stimuli (color terms with shapes: Experiment 2), that were serially presented in a left-to-right, right-to-left, or central-only manner. The subjects were better at recalling information that contained ordinal stimuli if the spatial flow of presentation during encoding matched the dominant directionality of the subjects' culture, compared to information encoded in the non-dominant direction. This phenomenon did not extend to infrequently ordered stimuli (e.g., color terms). These findings suggest that adults implicitly harness spatial organization to support memory, and this harnessing process is culturally mediated in tandem with our spatial biases.

  4. Culturally Inconsistent Spatial Structure Reduces Learning

    PubMed Central

    McCrink, Koleen; Shaki, Samuel

    2016-01-01

    Human adults tend to use a spatial continuum to organize any information they consider to be well-ordered, with a sense of initial and final position. The directionality of this spatial mapping is mediated by the culture of the subject, largely as a function of the prevailing reading and writing habits (for example, from left-to-right for English speakers or right-to-left for Hebrew speakers). In the current study, we tasked American and Israeli subjects with encoding and recalling a set of arbitrary pairings, consisting of frequently ordered stimuli (letters with shapes: Experiment 1) or infrequently ordered stimuli (color terms with shapes: Experiment 2), that were serially presented in a left-to-right, right-to-left, or central-only manner. The subjects were better at recalling information that contained ordinal stimuli if the spatial flow of presentation during encoding matched the dominant directionality of the subjects’ culture, compared to information encoded in the non-dominant direction. This phenomenon did not extend to infrequently ordered stimuli (e.g., color terms). These findings suggest that adults implicitly harness spatial organization to support memory, and this harnessing process is culturally mediated in tandem with our spatial biases. PMID:27208418

  5. Evaluation of performance validity using a Rey Auditory Verbal Learning Test forced-choice trial.

    PubMed

    Ashendorf, Lee; Sugarman, Michael A

    2016-05-01

    Forced-choice (FC) recognition memory is a common performance validity assessment methodology. This study introduces and evaluates the classification accuracy of an FC recognition trial for the Rey Auditory Verbal Learning Test (RAVLT). The present sample of 122 military veterans (mean age = 35.4 years, SD = 9.3) was administered the RAVLT along with the FC procedure as part of a full neuropsychological protocol. Veterans were assigned to valid (n = 94) or invalid (n = 28) groups based on outcomes of performance validity measures. The FC procedure was found to have strong sensitivity (67.9%) and specificity (92.6%) in predicting validity group status based on a cutoff score of ≤ 13. The FC trial outperformed RAVLT recognition hits (sensitivity = 46.4%, specificity = 91.5%) as a predictor of invalid performance. The RAVLT FC is demonstrated to be an effective measure of performance validity and is recommended for use as an adjunctive trial for the RAVLT.
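
    The cutoff logic behind sensitivity and specificity figures like these can be made concrete. The sketch below uses fabricated toy scores, not the study's data, and illustrative function names; it only shows how flagging FC scores at or below a cutoff of 13 yields the two accuracy statistics.

```python
import numpy as np

# Toy illustration of cutoff-based validity classification (not the study's data).
# A forced-choice (FC) score <= 13 flags a performance as invalid.

def classify_invalid(fc_scores, cutoff=13):
    """Boolean array: True where the FC score is at or below the cutoff."""
    return np.asarray(fc_scores) <= cutoff

def sensitivity_specificity(fc_scores, truly_invalid, cutoff=13):
    flagged = classify_invalid(fc_scores, cutoff)
    truth = np.asarray(truly_invalid, dtype=bool)
    sensitivity = flagged[truth].mean()      # flagged among truly invalid
    specificity = (~flagged)[~truth].mean()  # unflagged among truly valid
    return float(sensitivity), float(specificity)

# Four invalid performers (mostly low scores), four valid performers (mostly high)
scores = [10, 12, 13, 14, 15, 15, 16, 13]
truth  = [ 1,  1,  1,  1,  0,  0,  0,  0]
sens, spec = sensitivity_specificity(scores, truth)  # 0.75, 0.75 on this toy data
```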

  6. Regulation of learned vocal behavior by an auditory motor cortical nucleus in juvenile zebra finches.

    PubMed

    Naie, Katja; Hahnloser, Richard H R

    2011-07-01

    In the process of song learning, songbirds such as the zebra finch shape their initial soft and poorly formed vocalizations (subsong) first into variable plastic songs with a discernable recurring motif and then into highly stereotyped adult songs. A premotor brain area critically involved in plastic and adult song production is the cortical nucleus HVC. One of HVC's primary afferents, the nucleus interface of the nidopallium (NIf), provides a significant source of auditory input to HVC. However, the premotor involvement of NIf has not been extensively studied yet. Here we report that brief and reversible pharmacological inactivation of NIf in juvenile birds leads to transient degradation of plastic song toward subsong, as revealed by spectral and temporal song features. No such song degradation is seen following NIf inactivation in adults. However, in both juveniles and adults NIf inactivation leads to a transient decrease in song stereotypy. Our findings reveal a contribution of NIf to song production in juveniles that agrees with its known role in adults in mediating thalamic drive to downstream vocal motor areas during sleep.

  7. Effects of lips and hands on auditory learning of second-language speech sounds.

    PubMed

    Hirata, Yukari; Kelly, Spencer D

    2010-04-01

    Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., /kato/ vs. /kato/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

  8. Associative learning shapes the neural code for stimulus magnitude in primary auditory cortex.

    PubMed

    Polley, Daniel B; Heiser, Marc A; Blake, David T; Schreiner, Christoph E; Merzenich, Michael M

    2004-11-16

    Since the dawn of experimental psychology, researchers have sought an understanding of the fundamental relationship between the amplitude of sensory stimuli and the magnitudes of their perceptual representations. Contemporary theories support the view that magnitude is encoded by a linear increase in firing rate established in the primary afferent pathways. In the present study, we have investigated sound intensity coding in the rat primary auditory cortex (AI) and describe its plasticity by following paired stimulus reinforcement and instrumental conditioning paradigms. In trained animals, population-response strengths in AI became more strongly nonlinear with increasing stimulus intensity. Individual AI responses became selective to more restricted ranges of sound intensities and, as a population, represented a broader range of preferred sound levels. These experiments demonstrate that the representation of stimulus magnitude can be powerfully reshaped by associative learning processes and suggest that the code for sound intensity within AI can be derived from intensity-tuned neurons that change, rather than simply increase, their firing rates in proportion to increases in sound intensity.

  9. Experimental evidence for spatial learning in octopuses (Octopus bimaculoides).

    PubMed

    Boal, J G; Dunham, A W; Williams, K T; Hanlon, R T

    2000-09-01

    Octopuses forage far from temporary home dens to which they return for shelter. Spatial tasks may assess learning. Octopuses (Octopus bimaculoides) were placed in a novel arena, and their movements were tracked for 72 hr. Movements around the arena decreased across time, consistent with exploratory learning. Next, octopuses were given 23 hr to move around an arena; after a 24-hr delay, their memory of a burrow location was tested. Most remembered the location of the open burrow, demonstrating learning in 1 day. Finally, octopuses were trained to locate a single open escape burrow among 6 possible locations. Retention was tested after a week and was immediately followed by reversal training (location rotated 180 degrees). Octopuses learned the original location of the burrow, remembering it for a week. Path lengths increased significantly after reversal, gradually improving and showing relearning. Octopuses show exploratory behavior, learning, and retention of spatial information.

  10. Mice with Deficient BK Channel Function Show Impaired Prepulse Inhibition and Spatial Learning, but Normal Working and Spatial Reference Memory

    PubMed Central

    Azzopardi, Erin; Ruettiger, Lukas; Ruth, Peter; Schmid, Susanne

    2013-01-01

    Genetic variations in the large-conductance, voltage- and calcium-activated potassium channels (BK channels) have recently been implicated in mental retardation, autism, and schizophrenia, all of which are accompanied by severe cognitive impairments. In the present study we investigate the effects of functional BK channel deletion on cognition using a genetic mouse model with a knockout of the gene for the pore-forming α-subunit of the channel. We tested the F1 generation of a hybrid SV129/C57BL6 mouse line in which the slo1 gene was deleted in both parent strains. We first evaluated hearing and motor function to establish the suitability of this model for cognitive testing. Auditory brain stem responses to click stimuli showed no threshold differences between knockout mice and their wild-type littermates. Despite muscular tremor, reduced grip force, and impaired gait, knockout mice exhibited normal locomotion. These findings allowed for testing of sensorimotor gating using the acoustic startle reflex, as well as of working memory, spatial learning, and memory in the Y-maze and the Morris water maze, respectively. Prepulse inhibition on the first day of testing was normal, but the knockout mice did not improve over the days of testing as their wild-type littermates did. Spontaneous alternation in the Y-maze was normal as well, suggesting that the BK channel knockout does not impair working memory. In the Morris water maze knockout mice showed significantly slower acquisition of the task, but normal memory once the task was learned. Thus, we propose a crucial role of the BK channels in learning, but not in memory storage or recollection. PMID:24303038

  11. Visual and Spatial Modes in Science Learning

    ERIC Educational Resources Information Center

    Ramadas, Jayashree

    2009-01-01

    This paper surveys some major trends from research on visual and spatial thinking coming from cognitive science, developmental psychology, science literacy, and science studies. It explores the role of visualisation in creativity, in building mental models, and in the communication of scientific ideas, in order to place these findings in the…

  12. Landscapes, Spatial Justice and Learning Communities

    ERIC Educational Resources Information Center

    Armstrong, Felicity

    2012-01-01

    This paper draws on a study of a community-based adult education initiative, "Cumbria Credits," which took place during the period of serious economic decline which hit sections of the farming and the wider community in Cumbria during 2001. It draws on the principles underpinning Edward Soja's notion of "spatial justice" to explore transformations…

  15. Terminal feedback outperforms concurrent visual, auditory, and haptic feedback in learning a complex rowing-type task.

    PubMed

    Sigrist, Roland; Rauter, Georg; Riener, Robert; Wolf, Peter

    2013-01-01

    Augmented feedback, provided by coaches or displays, is a well-established strategy to accelerate motor learning. Frequent terminal feedback and concurrent feedback have been shown to be detrimental for simple motor task learning but supportive for complex motor task learning. However, conclusions on optimal feedback strategies have been mainly drawn from studies on artificial laboratory tasks with visual feedback only. Therefore, the authors compared the effectiveness of learning a complex, 3-dimensional rowing-type task with either concurrent visual, auditory, or haptic feedback to self-controlled terminal visual feedback. Results revealed that terminal visual feedback was most effective because it emphasized the internalization of task-relevant aspects. In contrast, concurrent feedback fostered the correction of task-irrelevant errors, which hindered learning. The concurrent visual and haptic feedback group performed much better during training with the feedback than in nonfeedback trials. Auditory feedback based on sonification of the movement error was not practical for training the 3-dimensional movement for most participants. Concurrent multimodal feedback in combination with terminal feedback may be most effective, especially if the feedback strategy is adapted to individual preferences and skill level.

  16. Engineering genders: A spatial analysis of engineering, gender, and learning

    NASA Astrophysics Data System (ADS)

    Weidler-Lewis, Joanna R.

    This three-article dissertation is an investigation into the ontology of learning, insofar as learning is a process of becoming. In each article I explore the general questions of who is learning, in what ways, and with what consequences. The context for this research is undergraduate engineering education, with particular attention to the construction of gender in this context. The first article is an examination of the organization of freshman engineering design. The second article draws on Lefebvre's spatial triad as both a theory and method for studying learning. The third article is an interview study of LGBTQA students creating their futures as engineers.

  17. Learning impaired children exhibit timing deficits and training-related improvements in auditory cortical responses to speech in noise.

    PubMed

    Warrier, Catherine M; Johnson, Krista L; Hayes, Erin A; Nicol, Trent; Kraus, Nina

    2004-08-01

    The physiological mechanisms that contribute to abnormal encoding of speech in children with learning problems are yet to be well understood. Furthermore, speech perception problems appear to be particularly exacerbated by background noise in this population. This study compared speech-evoked cortical responses recorded in a noisy background to those recorded in quiet in normal children (NL) and children with learning problems (LP). Timing differences between responses recorded in quiet and in background noise were assessed by cross-correlating the responses with each other. Overall response magnitude was measured with root-mean-square (RMS) amplitude. Cross-correlation scores indicated that 23% of LP children exhibited cortical neural timing abnormalities such that their neurophysiological representation of speech sounds became distorted in the presence of background noise. The latency of the N2 response in noise was isolated as being the root of this distortion. RMS amplitudes in these children did not differ from NL children, indicating that this result was not due to a difference in response magnitude. LP children who participated in a commercial auditory training program and exhibited improved cortical timing also showed improvements in phonological perception. Consequently, auditory pathway timing deficits can be objectively observed in LP children, and auditory training can diminish these deficits.
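
    The two response measures used here, cross-correlation between quiet and noise responses and RMS amplitude, can be sketched on synthetic waveforms. Everything below is illustrative (idealized sine-wave "responses", not the study's evoked potentials); the function names are assumptions.

```python
import numpy as np

# Illustrative sketch (synthetic signals, not the study's evoked responses):
# a peak normalized cross-correlation score between responses recorded in
# quiet vs. in noise, and root-mean-square (RMS) response amplitude.

def rms(x):
    """Root-mean-square amplitude of a waveform."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

def peak_normalized_xcorr(a, b):
    """Peak normalized cross-correlation; near 1.0 when b is a time-shifted copy of a."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0.0, 1.0, 500, endpoint=False)     # 1 s sampled at 500 Hz
quiet_resp = np.sin(2 * np.pi * 5 * t)             # idealized response in quiet
noise_resp = np.sin(2 * np.pi * 5 * (t - 0.02))    # same response, delayed 20 ms
score = peak_normalized_xcorr(quiet_resp, noise_resp)
```

    A high peak score with equal RMS amplitudes, as in this toy case, corresponds to a response that is shifted but not distorted; a low score would indicate the kind of timing distortion reported for the LP subgroup.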

  18. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    ERIC Educational Resources Information Center

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  20. Asymmetry in Auditory and Spatial Attention Span in Normal Elderly Genetically At Risk for Alzheimer's Disease

    PubMed Central

    JACOBSON, MARK W.; DELIS, DEAN C.; BONDI, MARK W.; SALMON, DAVID P.

    2010-01-01

    Some studies of elderly individuals with the ApoE-e4 genotype noted subtle deficits on tests of attention such as the WAIS-R Digit Span subtest, but these findings have not been consistently reported. One possible explanation for the inconsistent results could be the presence of subgroups of e4+ individuals with asymmetric cognitive profiles (i.e., significant discrepancies between verbal and visuospatial skills). Comparing genotype groups with individual, modality-specific tests might obscure subtle differences between verbal and visuospatial attention in these asymmetric subgroups. In this study, we administered the WAIS-R Digit Span and WMS-R Visual Memory Span subtests to 21 nondemented elderly e4+ individuals and 21 elderly e4- individuals matched on age, education, and overall cognitive ability. We hypothesized that (a) the e4+ group would show a higher incidence of asymmetric cognitive profiles when comparing Digit Span/Visual Memory Span performance relative to the e4- group; and (b) an analysis of individual test performance would fail to reveal differences between the two subject groups. Although the groups’ performances were comparable on the individual attention span tests, the e4+ group showed a significantly larger discrepancy between digit span and spatial span scores compared to the e4- group. These findings suggest that contrast measures of modality-specific attentional skills may be more sensitive to subtle group differences in at-risk groups, even when the groups do not differ on individual comparisons of standardized test means. The increased discrepancy between verbal and visuospatial attention may reflect the presence of “subgroups” within the ApoE-e4 group that are qualitatively similar to asymmetric subgroups commonly associated with the earliest stages of AD. PMID:15903153
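
    The "contrast measure" idea, comparing a within-subject discrepancy rather than each test separately, can be sketched with toy span scores. The sketch is an assumption throughout: a simple z-score difference stands in for whatever discrepancy metric the study used, and the scores are fabricated.

```python
import numpy as np

# Toy sketch of a verbal-vs-spatial discrepancy (contrast) measure.
# Scores are fabricated; the z-difference is one simple way to express
# a within-subject asymmetry between two attention-span tests.

def discrepancy_scores(verbal_span, spatial_span):
    """Within-subject difference of sample z-scores (verbal minus spatial)."""
    v = np.asarray(verbal_span, dtype=float)
    s = np.asarray(spatial_span, dtype=float)
    z_v = (v - v.mean()) / v.std()
    z_s = (s - s.mean()) / s.std()
    return z_v - z_s

verbal  = [10, 11, 9, 12, 6, 12, 5, 11]   # e.g. Digit Span scores (toy)
spatial = [ 9, 10, 9, 11, 11, 6, 10, 10]  # e.g. Visual Memory Span scores (toy)
disc = discrepancy_scores(verbal, spatial)
# Large |disc| values mark asymmetric profiles even when group means match.
```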

  1. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  2. Learning English vowels with different first-language vowel systems II: Auditory training for native Spanish and German speakers.

    PubMed

    Iverson, Paul; Evans, Bronwen G

    2009-08-01

    This study investigated whether individuals with small and large native-language (L1) vowel inventories learn second-language (L2) vowel systems differently, in order to better understand how L1 categories interfere with new vowel learning. Listener groups whose L1 was Spanish (5 vowels) or German (18 vowels) were given five sessions of high-variability auditory training for English vowels, after having been matched to assess their pre-test English vowel identification accuracy. Listeners were tested before and after training in terms of their identification accuracy for English vowels, the assimilation of these vowels into their L1 vowel categories, and their best exemplars for English (i.e., perceptual vowel space map). The results demonstrated that Germans improved more than Spanish speakers, despite the Germans' more crowded L1 vowel space. A subsequent experiment demonstrated that Spanish listeners were able to improve as much as the German group after an additional ten sessions of training, and that both groups were able to retain this learning. The findings suggest that a larger vowel category inventory may facilitate new learning, and support a hypothesis that auditory training improves identification by making the application of existing categories to L2 phonemes more automatic and efficient.

  3. Chronic exposure to broadband noise at moderate sound pressure levels spatially shifts tone-evoked responses in the rat auditory midbrain.

    PubMed

    Lau, Condon; Pienkowski, Martin; Zhang, Jevin W; McPherson, Bradley; Wu, Ed X

    2015-11-15

    Noise-induced hearing disorders are a significant public health concern. One cause of such disorders is exposure to high sound pressure levels (SPLs) above 85 dBA for eight hours/day. High SPL exposures occur in occupational and recreational settings and affect a substantial proportion of the population. However, an even larger proportion is exposed to more moderate SPLs for longer durations. Therefore, there is significant need to better understand the impact of chronic, moderate SPL exposures on auditory processing, especially in the absence of hearing loss. In this study, we applied functional magnetic resonance imaging (fMRI) with tonal acoustic stimulation on an established broadband rat exposure model (65 dB SPL, 30 kHz low-pass, 60 days). The auditory midbrain response of exposed subjects to 7 kHz stimulation (within exposure bandwidth) shifts dorsolaterally to regions that typically respond to lower stimulation frequencies. This shift is quantified by a region of interest analysis that shows that fMRI signals are higher in the dorsolateral midbrain of exposed subjects and in the ventromedial midbrain of control subjects (p<0.05). Also, the center of the responsive region in exposed subjects shifts dorsally relative to that of controls (p<0.05). A similar statistically significant shift (p<0.01) is observed using 40 kHz stimulation (above exposure bandwidth). The results suggest that high frequency midbrain regions above the exposure bandwidth spatially expand due to exposure. This expansion shifts lower frequency regions dorsolaterally. Similar observations have previously been made in the rat auditory cortex. Therefore, moderate SPL exposures affect auditory processing at multiple levels, from the auditory cortex to the midbrain.

  4. Discrimination of schizophrenia auditory hallucinators by machine learning of resting-state functional MRI.

    PubMed

    Chyzhyk, Darya; Graña, Manuel; Öngür, Döst; Shinn, Ann K

    2015-05-01

    Auditory hallucinations (AH) are a symptom that is most often associated with schizophrenia, but patients with other neuropsychiatric conditions, and even a small percentage of healthy individuals, may also experience AH. Elucidating the neural mechanisms underlying AH in schizophrenia may offer insight into the pathophysiology associated with AH more broadly across multiple neuropsychiatric disease conditions. In this paper, we address the problem of classifying schizophrenia patients with and without a history of AH, and healthy control (HC) subjects. To this end, we performed feature extraction from resting state functional magnetic resonance imaging (rsfMRI) data and applied machine learning classifiers, testing two kinds of neuroimaging features: (a) functional connectivity (FC) measures computed by lattice auto-associative memories (LAAM), and (b) local activity (LA) measures, including regional homogeneity (ReHo) and fractional amplitude of low frequency fluctuations (fALFF). We show that it is possible to perform classification within each pair of subject groups with high accuracy. Discrimination between patients with and without lifetime AH was highest, while discrimination between schizophrenia patients and HC participants was worst, suggesting that classification according to the symptom dimension of AH may be more valid than discrimination on the basis of traditional diagnostic categories. FC measures seeded in right Heschl's gyrus (RHG) consistently showed stronger discriminative power than those seeded in left Heschl's gyrus (LHG), a finding that appears to support AH models focusing on right hemisphere abnormalities. The cortical brain localizations derived from the features with strong classification performance are consistent with proposed AH models, and include left inferior frontal gyrus (IFG), parahippocampal gyri, the cingulate cortex, as well as several temporal and prefrontal cortical brain regions. Overall, the observed findings suggest that
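
    The overall pipeline described in this abstract, feature vectors per subject plus group labels plus cross-validated classification, can be sketched in a few lines. All specifics below are stand-ins: synthetic Gaussian vectors replace the rsfMRI connectivity and local-activity features, and a nearest-centroid classifier with leave-one-out cross-validation replaces the classifiers actually used.

```python
import numpy as np

# Hypothetical sketch of pairwise group classification on neuroimaging-style
# feature vectors. Synthetic data only; nearest-centroid is a stand-in for
# the study's machine learning classifiers.

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    idx = np.arange(len(y))
    correct = 0
    for i in idx:
        train = idx != i  # hold out subject i
        centroids = {c: X[train & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / len(y)

rng = np.random.default_rng(0)
n, d = 30, 50                                # 30 subjects per group, 50 features
group_a = rng.normal(0.0, 1.0, size=(n, d))  # e.g. patients with lifetime AH
group_b = rng.normal(0.5, 1.0, size=(n, d))  # e.g. patients without AH
X = np.vstack([group_a, group_b])
y = np.array([0] * n + [1] * n)
acc = loo_nearest_centroid_accuracy(X, y)    # well above chance for separable groups
```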

  5. Metabolic correlates of Rey auditory verbal learning test in elderly subjects with memory complaints.

    PubMed

    Brugnolo, Andrea; Morbelli, Silvia; Arnaldi, Dario; De Carli, Fabrizio; Accardo, Jennifer; Bossert, Irene; Dessi, Barbara; Famà, Francesco; Ferrara, Michela; Girtler, Nicola; Picco, Agnese; Rodriguez, Guido; Sambuceti, Gianmario; Nobili, Flavio

    2014-01-01

    We evaluated the brain metabolic correlates of the main indexes of a widely used word list learning test, the Rey Auditory Verbal Learning Test (RAVLT), in a group of elderly subjects with memory complaints. Fifty-four subjects (age: 72.02 ± 7.45; Mini-Mental State Examination (MMSE) score: 28.9 ± 1.24) presenting at a memory clinic complaining of memory deficit, but not demented, and thirty controls (age: 71.87 ± 7.08; MMSE score: 29.1 ± 1.1) were included. Subjects with memory complaints included both patients with (amnestic mild cognitive impairment, aMCI) and without (subjective memory complaints, SMC) impairment on memory tests. All subjects underwent 18F-fluorodeoxyglucose positron emission tomography (FDG-PET), analyzed with statistical parametric mapping. Patients with aMCI, but not those with SMC, showed the expected posterior cingulate-precuneus and parietal hypometabolism as compared to controls. Correlations were determined between four indexes of the RAVLT and brain metabolism. The results show a significant correlation between the delayed recall score and metabolism in the posterior cingulate gyrus of both hemispheres and in the left precuneus, as well as between a score of long-term percent retention and metabolism in the left posterior cingulate gyrus, precuneus, and orbitofrontal areas. These correlations survived correction for age, education, and MMSE score. No correlation was found between immediate or total recall scores and glucose metabolism. These data show the relevant role of the posterior cingulate-precuneus and orbitofrontal cortices in retention and retrieval of de-contextualized verbal memory material in a group of elderly subjects with memory complaints and shed light on the topography of synaptic dysfunction in these subjects, overlapping that found in the earliest stages of Alzheimer-type neurodegeneration.
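
    A correlation that "survived correction for age, education, and MMSE score" typically denotes a partial correlation: correlating the residuals of both variables after regressing out the covariates. A minimal sketch on synthetic data (not the study's), with illustrative variable names and only one covariate:

```python
import numpy as np

# Partial correlation by residualization, on synthetic data. Variable names
# (age, recall, metabolism) are illustrative only.

def residualize(v, covariates):
    """OLS residuals of v after regressing on an intercept plus covariates."""
    v = np.asarray(v, dtype=float)
    Z = np.column_stack([np.ones(len(v))] + [np.asarray(c, float) for c in covariates])
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ beta

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after removing the covariates from both."""
    rx = residualize(x, covariates)
    ry = residualize(y, covariates)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(1)
n = 54
age = rng.normal(72, 7, n)
recall = -0.05 * age + rng.normal(size=n)       # delayed-recall score (synthetic)
metabolism = 0.8 * recall + rng.normal(size=n)  # regional metabolism tracks recall
r = partial_corr(recall, metabolism, [age])     # stays positive after removing age
```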

  6. DISCRIMINATION OF SCHIZOPHRENIA AUDITORY HALLUCINATORS BY MACHINE LEARNING OF RESTING-STATE FUNCTIONAL MRI

    PubMed Central

    CHYZHYK, DARYA; GRAÑA, MANUEL; ÖNGÜR, DÖST; SHINN, ANN K

    2016-01-01

    Auditory hallucinations (AH) are a symptom that is most often associated with schizophrenia, but patients with other neuropsychiatric conditions, and even a small percentage of healthy individuals, may also experience AH. Elucidating the neural mechanisms underlying AH in schizophrenia may offer insight into the pathophysiology associated with AH more broadly across multiple neuropsychiatric disease conditions. In this paper, we address the problem of classifying schizophrenia patients with and without a history of AH, and healthy control subjects. To this end, we performed feature extraction from resting state functional magnetic resonance imaging (rsfMRI) data and applied machine learning classifiers, testing two kinds of neuroimaging features: (a) functional connectivity measures computed by lattice auto-associative memories (LAAM), and (b) local activity measures, including regional homogeneity (ReHo) and fractional amplitude of low frequency fluctuations (fALFF). We show that it is possible to perform classification within each pair of subject groups with high accuracy. Discrimination between patients with and without lifetime AH was highest, while discrimination between schizophrenia patients and healthy control participants was worst, suggesting that classification according to the symptom dimension of AH may be more valid than discrimination on the basis of traditional diagnostic categories. Functional connectivity measures seeded in right Heschl’s gyrus consistently showed stronger discriminative power than those seeded in left Heschl’s gyrus, a finding that appears to support AH models focusing on right hemisphere abnormalities. The cortical brain localizations derived from the features with strong classification performance are consistent with proposed AH models, and include left inferior frontal gyrus, parahippocampal gyri, the cingulate cortex, as well as several temporal and prefrontal cortical brain regions. Overall, the observed findings suggest

  7. Spectral and spatial tuning of onset and offset response functions in auditory cortical fields A1 and CL of rhesus macaques.

    PubMed

    Ramamurthy, Deepa L; Recanzone, Gregg H

    2016-12-07

    The mammalian auditory cortex is necessary for spectral and spatial processing of acoustic stimuli. Most physiological studies of single neurons in the auditory cortex have focused on the onset and sustained portions of evoked responses, but there have been far fewer studies on the relationship between onset and offset responses. In the current study, we compared spectral and spatial tuning of onset and offset responses of neurons in primary auditory cortex (A1) and the caudolateral (CL) belt area of awake macaque monkeys. Several different metrics were used to determine the relationship between onset and offset response profiles in both frequency and space domains. In the frequency domain, a substantial proportion of neurons in A1 and CL displayed highly dissimilar best stimuli for onset- and offset-evoked responses, though even for these neurons, there was usually a large overlap in the range of frequencies that elicited onset and offset responses and distributions of tuning overlap metri