Science.gov

Sample records for auditory spatial learning

  1. Cross auditory-spatial learning in early-blind individuals.

    PubMed

    Chan, Chetwyn C H; Wong, Alex W K; Ting, Kin-Hung; Whitfield-Gabrieli, Susan; He, Jufang; Lee, Tatia M C

    2012-11-01

    Cross-modal processing enables the utilization of information received via different sensory organs to facilitate more complicated human actions. We used functional MRI on early-blind individuals to study the neural processes associated with cross auditory-spatial learning. The auditory signals, converted from echoes of ultrasonic signals emitted from a navigation device, were novel to the participants. The subjects were trained repeatedly for 4 weeks in associating the auditory signals with different distances. Subjects' blood-oxygenation-level-dependent responses were captured at baseline and after training using a sound-to-distance judgment task. Whole-brain analyses indicated that the task used in the study involved auditory discrimination as well as spatial localization. The learning process was shown to be mediated by the inferior parietal cortex and the hippocampus, suggesting the integration and binding of auditory features to distances. The right cuneus was found to possibly serve a general rather than a specific role, forming an occipital-enhanced network for cross auditory-spatial learning. This functional network is likely to be unique to those with early blindness, since the normal-vision counterparts shared activities only in the parietal cortex. PMID:21932260

  2. Learning-induced plasticity in auditory spatial representations revealed by electrical neuroimaging.

    PubMed

    Spierer, Lucas; Tardif, Eric; Sperdin, Holger; Murray, Micah M; Clarke, Stephanie

    2007-05-16

    Auditory spatial representations are likely encoded at a population level within human auditory cortices. We investigated learning-induced plasticity of spatial discrimination in healthy subjects using auditory-evoked potentials (AEPs) and electrical neuroimaging analyses. Stimuli were 100 ms white-noise bursts lateralized with varying interaural time differences. In three experiments, plasticity was induced with 40 min of discrimination training. During training, accuracy significantly improved from near-chance levels to approximately 75%. Before and after training, AEPs were recorded to stimuli presented passively with a more medial sound lateralization outnumbering a more lateral one (7:1). In experiment 1, the same lateralizations were used for training and AEP sessions. Significant AEP modulations to the different lateralizations were evident only after training, indicative of a learning-induced mismatch negativity (MMN). More precisely, this MMN at 195-250 ms after stimulus onset followed from differences in the AEP topography to each stimulus position, indicative of changes in the underlying brain network. In experiment 2, mirror-symmetric locations were used for training and AEP sessions; no training-related AEP modulations or MMN were observed. In experiment 3, the discrimination of trained plus equidistant untrained separations was tested psychophysically before and 0, 6, 24, and 48 h after training. Learning-induced plasticity lasted <6 h, did not generalize to untrained lateralizations, and was not the simple result of strengthening the representation of the trained lateralizations. Thus, learning-induced plasticity of auditory spatial discrimination relies on spatial comparisons, rather than a spatial anchor or a general comparator. Furthermore, cortical auditory representations of space are dynamic and subject to rapid reorganization. PMID:17507569
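
    The interaural-time-difference (ITD) manipulation used here can be sketched in code. This is a minimal illustration, not the study's stimulus-generation procedure: only the 100 ms white-noise burst comes from the abstract, while the 44.1 kHz sample rate and 300 µs ITD are assumed values.

```python
import numpy as np

def itd_burst(duration_s=0.1, fs=44100, itd_s=300e-6, seed=0):
    """Generate a white-noise burst lateralized by an interaural time
    difference (ITD): the right channel is a delayed copy of the left."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    shift = int(round(itd_s * fs))      # ITD expressed in whole samples
    noise = rng.standard_normal(n + shift)
    left = noise[shift:]                # leading ear
    right = noise[:n]                   # lagging ear, delayed by `shift`
    return np.stack([left, right])      # shape (2, n)

stereo = itd_burst()
```

    Delaying one channel by a whole number of samples is the crudest way to impose an ITD; fractional delays would normally be realized by interpolation or by phase shifts in the frequency domain.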

  3. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  4. Impairment of olfactory, auditory, and spatial serial reversal learning in rats recovered from pyrithiamine-induced thiamine deficiency.

    PubMed

    Mair, R G; Knoth, R L; Rabchenuk, S A; Langlais, P J

    1991-06-01

    Rats that had recovered from pyrithiamine-induced thiamine deficiency (PTD) were compared with controls for spatial, auditory, and olfactory serial reversal learning (SRL); spatial matching to sample (MTS); auditory go-no-go discrimination; and open-field exploration. PTD rats made more errors reaching criterion for SRL in all modalities but showed normal transfer effects between problems. PTD rats were also impaired in learning the go-no-go and MTS tasks and showed consistent alterations in exploratory activity. It is argued that PTD rats, like human Korsakoff patients, have impairments of learning and memory (but spared capacity for reference memory) that extend across sensory modalities. Postmortem analyses showed normal indices of cortical cholinergic, noradrenergic, dopaminergic, and serotonergic function and consistent bilateral lesions of the thalamus, which were centered on the internal medullary lamina, and the medial mammillary nucleus. PMID:1907457

  5. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588
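
    The many-to-one contingency at the heart of the SMART task can be sketched as follows. The category names, exemplar labels, and five-sound sequence length are hypothetical placeholders; the abstract specifies only that each of four sound categories predicts one of four screen locations.

```python
import random

# Hypothetical sketch of the SMART task's many-to-one contingency:
# each sound category deterministically predicts one screen location.
CATEGORY_TO_LOCATION = {"cat_A": 0, "cat_B": 1, "cat_C": 2, "cat_D": 3}
EXEMPLARS = {c: [f"{c}_ex{i}" for i in range(6)] for c in CATEGORY_TO_LOCATION}

def make_trial(rng=random):
    """One trial: a brief sequence of exemplars drawn from one category,
    followed by the visual target at that category's location."""
    category = rng.choice(sorted(CATEGORY_TO_LOCATION))
    sounds = [rng.choice(EXEMPLARS[category]) for _ in range(5)]
    return {"sounds": sounds, "target_location": CATEGORY_TO_LOCATION[category]}
```

    Because the participant's overt task is only to detect the visual target, any learning of the sound categories under this contingency is incidental.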

  6. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  7. Auditory spatial processing in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Nicholas, Jennifer M; Yong, Keir X X; Downey, Laura E; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease

  8. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study.

  9. Semantic Elaboration in Auditory and Visual Spatial Memory

    PubMed Central

    Taevs, Meghan; Dahmani, Louisa; Zatorre, Robert J.; Bohbot, Véronique D.

    2010-01-01

    The aim of this study was to investigate the hypothesis that semantic information facilitates auditory and visual spatial learning and memory. An auditory spatial task was administered, whereby healthy participants were placed in the center of a semi-circle that contained an array of speakers where the locations of nameable and non-nameable sounds were learned. In the visual spatial task, locations of pictures of abstract art intermixed with nameable objects were learned by presenting these items in specific locations on a computer screen. Participants took part in both the auditory and visual spatial tasks, which were counterbalanced for order and were learned at the same rate. Results showed that learning and memory for the spatial locations of nameable sounds and pictures was significantly better than for non-nameable stimuli. Interestingly, there was a cross-modal learning effect such that the auditory task facilitated learning of the visual task and vice versa. In conclusion, our results support the hypotheses that the semantic representation of items, as well as the presentation of items in different modalities, facilitate spatial learning and memory. PMID:21833283

  10. Tactile feedback improves auditory spatial localization.

    PubMed

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587
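
    The spatial bisection judgment described above reduces to a simple comparison, sketched here with speaker positions in arbitrary degrees of azimuth (the function name and the example positions are illustrative, not taken from the study):

```python
def bisection_response(p1, p2, p3):
    """Spatial bisection judgment: is the middle sound of the three-sound
    sequence spatially closer to the first sound or to the third?"""
    return "first" if abs(p2 - p1) < abs(p3 - p2) else "third"
```

    A threshold is then the smallest displacement of the middle sound that the listener can judge reliably; the feedback manipulations in the study target exactly this judgment.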

  11. Auditory learning: a developmental method.

    PubMed

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions through physical contacts with the environment, including the trainers. PMID:15940990

  12. Spatial Coherence in Auditory Cortical Activity Fluctuations

    NASA Astrophysics Data System (ADS)

    Yoshida, Takamasa; Katura, Takusige; Yamazaki, Kyoko; Tanaka, Shigeru; Iwamoto, Mitsumasa; Tanaka, Naoki

    2007-07-01

    We examined activity fluctuations, recorded with voltage-sensitive dye imaging in the auditory cortex of guinea pigs, as ongoing and spontaneous activities. We investigated whether such activities demonstrate spatial coherence, which reflects the cortical functional organization. We used independent component analysis to extract neural activities from the observed signals, and a scaled signal-plus-noise model to estimate ongoing activities from the neural activities that included response components. We mapped the correlation between the time course in each channel and those in the others across the whole observed region. Ongoing and spontaneous activities in the auditory cortex were found to have strong spatial coherence corresponding to the tonotopy, one form of auditory functional organization.
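
    A seed-based correlation map of the kind described above can be sketched with plain NumPy. This is a toy stand-in for the paper's analysis pipeline (which additionally used independent component analysis and a signal-plus-noise model); `signals` is assumed to be a channels-by-timepoints array.

```python
import numpy as np

def correlation_map(signals, seed_channel):
    """Correlate one channel's time course with every channel's time course.
    `signals` has shape (n_channels, n_timepoints); returns one Pearson
    correlation coefficient per channel."""
    z = signals - signals.mean(axis=1, keepdims=True)   # remove each mean
    z /= signals.std(axis=1, keepdims=True)             # unit variance
    return z @ z[seed_channel] / signals.shape[1]
```

    Repeating this with each channel as the seed yields the full map of pairwise coherence across the imaged region.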

  13. Auditory spatial attention representations in the human cerebral cortex.

    PubMed

    Kong, Lingqiang; Michalka, Samantha W; Rosen, Maya L; Sheremata, Summer L; Swisher, Jascha D; Shinn-Cunningham, Barbara G; Somers, David C

    2014-03-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  14. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation of localization performance. Following chronic exposure (10–60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teacher signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  15. Transfer of Noncorresponding Spatial Associations to the Auditory Simon Task

    ERIC Educational Resources Information Center

    Proctor, Robert W.; Yamaguchi, Motonori; Vu, Kim-Phuong L.

    2007-01-01

    Four experiments examined transfer of noncorresponding spatial stimulus-response associations to an auditory Simon task for which stimulus location was irrelevant. Experiment 1 established that, for a horizontal auditory Simon task, transfer of spatial associations occurs after 300 trials of practice with an incompatible mapping of auditory…

  16. Impairment of auditory spatial localization in congenitally blind human subjects.

    PubMed

    Gori, Monica; Sandini, Giulio; Martinoli, Cristina; Burr, David C

    2014-01-01

    Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can impact negatively on auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially-distributed sound sources were seriously compromised, on average 4.2-fold typical thresholds, with half of the subjects performing at random. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or with auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind. PMID:24271326

  17. Impairment of auditory spatial localization in congenitally blind human subjects

    PubMed Central

    Gori, Monica; Sandini, Giulio; Martinoli, Cristina

    2014-01-01

    Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can impact negatively on auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially-distributed sound sources were seriously compromised, on average 4.2-fold typical thresholds, with half of the subjects performing at random. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or with auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind. PMID:24271326

  18. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.
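
    The adaptive staircase mentioned above can be sketched generically. This is a textbook 2-down/1-up rule, not necessarily the variant used in the study; `respond` stands in for a single trial (returning True when the listener answers correctly), and the reversal counts are illustrative assumptions.

```python
def staircase_2down1up(respond, start_level, step, n_reversals=8):
    """Minimal 2-down / 1-up adaptive staircase: the level drops after two
    consecutive correct responses and rises after any error, converging on
    the ~70.7%-correct point. Threshold = mean of the last six reversals."""
    level, correct_streak, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak < 2:
                continue                    # need two in a row to descend
            correct_streak, new_dir = 0, -1
        else:
            correct_streak, new_dir = 0, +1
        if direction and new_dir != direction:
            reversals.append(level)         # direction change = reversal
        direction = new_dir
        level += new_dir * step
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

    With a deterministic observer the track oscillates around the point where responses flip, which is what the threshold estimate averages over.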

  19. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30-degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60- or 90-degree azimuth positions.
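
    Spatialization by FIR filtering, as described above, amounts to convolving the mono input with a pair of head-related impulse responses (HRIRs). The sketch below uses toy delay-and-attenuate kernels in place of measured HRIRs; the delay and gain values are illustrative assumptions.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Binaural spatialization by FIR filtering: convolve the mono input
    with a left-ear and a right-ear head-related impulse response (HRIR)."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Toy stand-ins for measured HRIRs, for a source on the listener's left:
# the right ear receives the sound ~300 us later (13 samples at 44.1 kHz)
# and attenuated by half. Both kernels share a length so the channels align.
hrir_l = np.concatenate([[1.0], np.zeros(13)])
hrir_r = np.concatenate([np.zeros(13), [0.5]])

stereo = spatialize(np.ones(100), hrir_l, hrir_r)
```

    Real HRIRs encode direction-dependent spectral shaping as well as delay and level, which is what makes HRTF-filtered speech separable by apparent position.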

  20. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    NASA Astrophysics Data System (ADS)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  1. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease

  2. Auditory Learning Materials for Special Education: Catalog.

    ERIC Educational Resources Information Center

    Smith, Marsha C.; O'Connor, Phyllis

    The catalog (developed by the Great Lakes Region Special Education Instructional Materials Center) provides information on more than 100 auditory learning materials for use in special education. Described in the first section of the catalog are procedures used to evaluate and classify auditory instructional materials, including a list of…

  3. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
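
    As a rough illustration of the auralization step described above (convolving an anechoic recording with a measured binaural impulse response), a direct-form convolution can be sketched as follows. This is a generic sketch, not the authors' processing chain; in practice each ear's impulse response is applied separately and FFT-based convolution is used for speed.

```python
def convolve(x, h):
    """Direct-form (time-domain) convolution of signal x with impulse
    response h: y[k] = sum_i x[i] * h[k - i]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Toy example: a two-tap impulse response (direct sound plus one echo)
dry = [1.0, 2.0, 3.0]
ir = [1.0, 0.5]
wet = convolve(dry, ir)
```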

  4. Six Degrees of Auditory Spatial Separation.

    PubMed

    Carlile, Simon; Fox, Alex; Orchard-Mills, Emily; Leung, Johahn; Alais, David

    2016-06-01

    The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This "dipper" function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). The discrimination thresholds for the binaural localization cues were linear over the same range, so these cannot explain the shape of these functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping emerging subsequent to the binaural cue coding. PMID:27033087
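
    The binaural localization cues mentioned above include the interaural time difference (ITD), the arrival-time lag between the two ears. A generic way to estimate an ITD from a two-channel recording is to find the lag that maximizes the cross-correlation of the ear signals; the sketch below is an illustration of that cue, not the authors' analysis, and the sampling rate, lag range, and test signal are illustrative assumptions.

```python
import random

def estimate_itd(left, right, fs, max_lag=40):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals. A positive
    result means the right-ear signal lags the left (source on the left)."""
    n = min(len(left), len(right))
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate left[i] with right[i + lag] over the valid overlap
        corr = sum(left[i] * right[i + lag]
                   for i in range(max(0, -lag), min(n, n - lag)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag / fs

# Example: the right-ear channel is a copy delayed by 5 samples at 44.1 kHz
random.seed(0)
sig = [random.gauss(0, 1) for _ in range(1000)]
left = sig
right = [0.0] * 5 + sig[:-5]           # right channel lags by 5 samples
itd = estimate_itd(left, right, fs=44100)
```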

  5. Visual task enhances spatial selectivity in the human auditory cortex.

    PubMed

    Salminen, Nelli H; Aho, Joanna; Sams, Mikko

    2013-01-01

    The auditory cortex represents spatial locations differently from other sensory modalities. While visual and tactile cortices utilize topographical space maps, for audition no such cortical map has been found. Instead, auditory cortical neurons have wide spatial receptive fields and together they form a population rate code of sound source location. Recent studies have shown that this code is modulated by task conditions so that during auditory tasks it provides better selectivity to sound source location than during idle listening. The goal of this study was to establish whether the neural representation of auditory space can also be influenced by task conditions involving other sensory modalities than hearing. Therefore, we conducted magnetoencephalography (MEG) recordings in which auditory spatial selectivity of the human cortex was probed with an adaptation paradigm while subjects performed a visual task. Engaging in the task led to an increase in neural selectivity to sound source location compared to when no task was performed. This suggests that an enhancement in the population rate code of auditory space took place during task performance. This enhancement in auditory spatial selectivity was independent of the direction of visual orientation. Together with previous studies, these findings suggest that performing any demanding task, even one in which sounds and their source locations are irrelevant, can lead to enhancements in the neural representation of auditory space. Such mechanisms may have great survival value as sounds are capable of producing location information on potentially relevant events in all directions and over long distances. PMID:23543781

  6. Auditory and motor imagery modulate learning in music performance.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of…

  7. Auditory and motor imagery modulate learning in music performance

    PubMed Central

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of…

  8. Perceptual Learning In The Developing Auditory Cortex

    PubMed Central

    Bao, Shaowen

    2015-01-01

    A hallmark of the developing auditory cortex is the heightened plasticity in the critical period, during which acoustic inputs can indelibly alter cortical function. However, not all sounds in the natural acoustic environment are ethologically relevant. How does the auditory system resolve relevant sounds from the acoustic environment in such an early developmental stage when most associative learning mechanisms are not yet fully functional? What can the auditory system learn from one of the most important classes of sounds—animal vocalizations? How does naturalistic acoustic experience shape cortical sound representation and perception? To answer these questions, we need to consider an unusual strategy—statistical learning—where what the system needs to learn is embedded in the sensory input. Here, I will review recent findings on how certain statistical structure of natural animal vocalizations shapes auditory cortical acoustic representations, and how cortical plasticity may underlie learned categorical sound perception. These results will be discussed in the context of human speech perception. PMID:25728188

  9. Negative emotion provides cues for orienting auditory spatial attention.

    PubMed

    Asutay, Erkin; Västfjäll, Daniel

    2015-01-01

    Auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left-right or front-back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants' task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same valence pairs (emotional-emotional or neutral-neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain. PMID:26029149

  10. Negative emotion provides cues for orienting auditory spatial attention

    PubMed Central

    Asutay, Erkin; Västfjäll, Daniel

    2015-01-01

    Auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left–right or front–back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants’ task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same valence pairs (emotional–emotional or neutral–neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain. PMID:26029149

  11. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    ERIC Educational Resources Information Center

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  12. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  13. From ear to body: the auditory-motor loop in spatial cognition.

    PubMed

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  14. Early auditory enrichment with music enhances auditory discrimination learning and alters NR2B protein expression in rat auditory cortex.

    PubMed

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2009-01-01

    Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning. PMID:18706452

  15. Auditory spatial localization: Developmental delay in children with visual impairments.

    PubMed

    Cappagli, Giulia; Gori, Monica

    2016-01-01

    For individuals with visual impairments, auditory spatial localization is one of the most important abilities for navigating the environment. Many studies suggest that blind adults show similar or even enhanced performance in localizing auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that in contrast to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in visually impaired children, and that this capacity recovers over time. PMID:27002960

  16. Auditory plasticity and speech motor learning

    PubMed Central

    Nasir, Sazzad M.; Ostry, David J.

    2009-01-01

    Is plasticity in sensory and motor systems linked? Here, in the context of speech motor learning and perception, we test the idea that sensory function is modified by motor learning and, in particular, that speech motor learning affects a speaker's auditory map. We assessed speech motor learning by using a robotic device that displaced the jaw and selectively altered somatosensory feedback during speech. We found that with practice speakers progressively corrected for the mechanical perturbation and that after motor learning they also showed systematic changes in their perceptual classification of speech sounds. The perceptual shift was tied to motor learning: individuals who displayed greater amounts of learning also showed greater perceptual change. Perceptual change was not observed in control subjects who produced the same movements but in the absence of a force field, nor in subjects who experienced the force field but failed to adapt to the mechanical load. The perceptual effects observed here indicate the involvement of the somatosensory system in the neural processing of speech sounds and suggest that speech motor learning results in changes to auditory perceptual function. PMID:19884506

  17. Auditory Discrimination Learning: Role of Working Memory

    PubMed Central

    Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  18. Auditory Discrimination Learning: Role of Working Memory.

    PubMed

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068
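
    The tone n-back task used above to measure working memory can be illustrated schematically: on each trial the subject hears a tone and must respond when it matches the tone presented n trials earlier. The sketch below is a generic illustration with made-up frequencies, not the authors' stimulus set.

```python
def n_back_targets(seq, n):
    """Return the indices i at which seq[i] repeats the item presented
    n trials earlier (the 'targets' a subject should respond to)."""
    return [i for i in range(n, len(seq)) if seq[i] == seq[i - n]]

# A short 2-back sequence of tone frequencies (Hz); trials 2, 4, and 5
# repeat the tone heard two trials earlier and are therefore targets.
tones = [440, 494, 440, 523, 440, 523]
hits = n_back_targets(tones, 2)
```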

  19. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  20. Cortical Synaptic Inhibition Declines during Auditory Learning

    PubMed Central

    von Trapp, Gardiner; Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.

    2015-01-01

    Auditory learning is associated with an enhanced representation of acoustic cues in primary auditory cortex, and modulation of inhibitory strength is causally involved in learning. If this inhibitory plasticity is associated with task learning and improvement, its expression should emerge and persist until task proficiency is achieved. We tested this idea by measuring changes to cortical inhibitory synaptic transmission as adult gerbils progressed through the process of associative learning and perceptual improvement. Using either of two procedures, aversive or appetitive conditioning, animals were trained to detect amplitude-modulated noise and then tested daily. Following each training session, a thalamocortical brain slice was generated, and inhibitory synaptic properties were recorded from layer 2/3 pyramidal neurons. Initial associative learning was accompanied by a profound reduction in the amplitude of spontaneous IPSCs (sIPSCs). However, sIPSC amplitude returned to control levels when animals reached asymptotic behavioral performance. In contrast, paired-pulse ratios decreased in trained animals as well as in control animals that experienced unpaired conditioned and unconditioned stimuli. This latter observation suggests that inhibitory release properties are modified during behavioral conditioning, even when an association between the sound and reinforcement cannot occur. These results suggest that associative learning is accompanied by a reduction of postsynaptic inhibitory strength that persists for several days during learning and perceptual improvement. PMID:25904785

  21. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
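
    The adaptive staircase method mentioned above adjusts the signal level trial by trial, converging on a listener's threshold. A minimal sketch of a generic 1-up/2-down transformed staircase (which converges on the ~70.7%-correct point) is shown below; the step size, reversal count, and the simulated listener are illustrative assumptions, not the study's exact procedure.

```python
def staircase(respond, start_level=0.0, step=4.0, max_reversals=8):
    """Generic 1-up/2-down adaptive staircase.

    `respond(level)` returns True for a correct response. The level is
    lowered after two consecutive correct responses and raised after
    each error. The threshold estimate is the mean of the levels at
    which the track reversed direction.
    """
    level, correct_in_row, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < max_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:          # 2-down: make the task harder
                correct_in_row = 0
                if direction == +1:          # track just changed direction
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            correct_in_row = 0               # 1-up: make the task easier
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)

# Example: a simulated listener who responds correctly above -6 dB SNR
threshold = staircase(lambda lvl: lvl > -6.0, start_level=12.0, step=2.0)
```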

  22. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear to different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.

  23. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in…

  4. Computational Characterization of Visually Induced Auditory Spatial Adaptation

    PubMed Central

    Wozny, David R.; Shams, Ladan

    2011-01-01

Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, where the underlying causal structure of the stimuli is inferred from both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterizing perceptual inference within a static environment, and therefore little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory–visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), a change in the strength of the auditory bias (i.e., a change in the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory–visual discrepancy is attributed by the nervous system to sensory representation error and that, as a result, the sensory map of space is recalibrated to correct the error. PMID:22069383
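The likelihood-shift account the authors arrive at can be made concrete with a minimal Gaussian-observer sketch. This is an illustration only: the full model in the paper additionally infers the causal structure of the stimuli, and every number below is hypothetical, not a fitted value from the study.

```python
def map_estimate(sensory_mean, sensory_sd, prior_mean, prior_sd):
    """Posterior mean for a Gaussian likelihood combined with a Gaussian
    prior: the standard precision-weighted average."""
    w = (1 / sensory_sd**2) / (1 / sensory_sd**2 + 1 / prior_sd**2)
    return w * sensory_mean + (1 - w) * prior_mean

true_location = 0.0     # degrees azimuth
likelihood_shift = 5.0  # hypothetical shift of the auditory likelihood mean

# Before exposure the auditory likelihood is centered on the true location;
# after adaptation its mean is displaced toward the experienced visual offset.
before = map_estimate(true_location, 8.0, prior_mean=0.0, prior_sd=20.0)
after = map_estimate(true_location + likelihood_shift, 8.0,
                     prior_mean=0.0, prior_sd=20.0)
print(before, after)  # the perceived location moves toward the visual offset
```

Because only the likelihood mean moves while its spread and the prior stay fixed, the perceived location shifts by a constant fraction of the visual offset, which is the signature the authors report.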

  5. Spatial and nonspatial peripheral auditory processing in congenitally blind people.

    PubMed

    Chen, Qi; Zhang, Ming; Zhou, Xiaolin

    2006-09-18

    Congenitally blind adults' performance in spatial and nonspatial peripheral auditory attention tasks was compared with that of sighted adults in a paradigm manipulating location-based and frequency-based inhibition of return concurrently. Blind study participants responded faster in spatial attention tasks (detection/localization) and slower in the nonspatial frequency discrimination task than sighted participants. Both groups, however, showed the same patterns of interaction between location-based and frequency-based inhibition of return. These results suggest that early vision deprivation enhances the function of the posterior-dorsal auditory 'where' pathway but impairs the function of the anterior-ventral 'what' pathway during peripheral auditory attention. The altered processing speed in the blind, however, is not accompanied by alteration in attentional orienting mechanisms that may be localized to higher cortices. PMID:16932156

  6. Sensorimotor Learning Enhances Expectations During Auditory Perception.

    PubMed

    Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara

    2015-08-01

    Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect. PMID:24621528

  7. Integrated processing of spatial cues in human auditory cortex.

    PubMed

    Salminen, Nelli H; Takanen, Marko; Santala, Olli; Lamminsalo, Jarkko; Altoè, Alessandro; Pulkki, Ville

    2015-09-01

Human sound source localization relies on acoustical cues, most importantly the interaural differences in time and level (ITD and ILD). For reaching a unified representation of auditory space, the auditory nervous system needs to combine the information provided by these two cues. In search of such a unified representation, we conducted a magnetoencephalography (MEG) experiment that took advantage of the location-specific adaptation of the auditory cortical N1 response. In general, the attenuation caused by a preceding adaptor sound to the response elicited by a probe depends on their spatial arrangement: if the two sounds coincide, adaptation is stronger than when the locations differ. Here, we presented adaptor-probe pairs that contained different localization cues, for instance, adaptors with ITD and probes with ILD. We found that the adaptation of the N1 amplitude was location-specific across localization cues. This result can be explained by the existence of auditory cortical neurons that are sensitive to sound source location regardless of which cue, ITD or ILD, provides the location information. Such neurons would form a cue-independent, unified representation of auditory space in human auditory cortex. PMID:26074304

  8. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway

    PubMed Central

    Yao, Justin D.; Bremen, Peter

    2015-01-01

Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. SIGNIFICANCE STATEMENT Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS in…

  9. Consortium on Auditory Learning Materials for the Handicapped: Cumulative Papers.

    ERIC Educational Resources Information Center

    Michigan State Univ., East Lansing. Instructional Media Center.

    Presented is information generated from a Consortium on Auditory Learning Materials for the Handicapped. A list of consortium members and a glossary of 35 terms related to auditory learning are provided in Sections 1 and 2. Section 3 is a chart of projected goals (such as participating in teacher conferences) of the 12 consortium member units…

  10. Bridging the Gap Between Materials and Learners: Maximizing Auditory Instruction. Auditory Learning Monograph Series 5.

    ERIC Educational Resources Information Center

    Carlson, Nancy A.; And Others

    Described is a system (created by the Great Lakes Region Special Education Instructional Materials Center) for classifying auditory learners and matching them to appropriate auditory learning experiences. The learner classification system outlined utilizes an organizational table that accommodates five learner variables (mental age, chronological…

  11. Evidence for auditory feature integration with spatially distributed items.

    PubMed

    Hall, M D; Pastore, R E; Acker, B E; Huang, W

    2000-08-01

    Recent auditory research using sequentially presented, spatially fixed tones has found evidence that, as in vision for simultaneous, spatially distributed objects, attention appears to be important for the integration of perceptual features that enable the identification of auditory events. The present investigation extended these findings to arrays of simultaneously presented, spatially distributed musical tones. In the primary tasks, listeners were required to search for specific cued conjunctions of values for the features of pitch and instrument timbre. In secondary tasks, listeners were required to search for a single cued value of either the pitch or the timbre feature. In the primary tasks, listeners made frequent errors in reporting the presence or absence of target conjunctions. Probability modeling, derived from the visual search literature, revealed that the error rates in the primary tasks reflected the relatively infrequent failure to correctly identify pitch or timbre features, plus the far more frequent illusory conjunction of separately presented pitch and timbre features. Estimates of illusory conjunction rate ranged from 23% to 40%. Thus, a process must exist in audition that integrates separately registered features. The implications of the results for the processing of isolated auditory features, as well as auditory events defined by conjunctions of features, are discussed. PMID:11019620
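The logic of backing out an illusory conjunction rate from observed errors can be sketched with a simplified two-source error model. This is an illustration of the general approach, not the exact probability model the authors derived from the visual search literature, and the input numbers are hypothetical (chosen only to echo the 23-40% range reported above):

```python
def illusory_conjunction_rate(p_conjunction_error, p_feature_error):
    """Back out an illusory-conjunction rate from the observed error rate in
    the conjunction task, assuming an error arises either from misidentifying
    a feature or, when both features are registered correctly, from binding
    them incorrectly."""
    if not 0.0 <= p_feature_error < 1.0:
        raise ValueError("feature error rate must lie in [0, 1)")
    return (p_conjunction_error - p_feature_error) / (1.0 - p_feature_error)

# Hypothetical numbers in the spirit of the reported estimates:
rate = illusory_conjunction_rate(p_conjunction_error=0.40, p_feature_error=0.10)
print(round(rate, 3))  # 0.333
```

The key move, shared with the paper's modeling, is that single-feature search performance fixes the feature-misidentification term, so any excess error in the conjunction task is attributed to illusory binding.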

  12. The cortical dynamics underlying effective switching of auditory spatial attention

    PubMed Central

    Larson, Eric; Lee, Adrian KC

    2012-01-01

    Successful rapid deployment of attention to relevant sensory stimuli is critical for survival. In a complex environment, attention can be captured by salient events or be deployed volitionally. Furthermore, when multiple events are of interest concurrently, effective interaction with one's surroundings hinges on efficient top-down control of shifting attention. It has been hypothesized that two separate cortical networks coordinate attention shifts across multiple modalities. However, the cortical dynamics of these networks and their behavioral relevance to switching of auditory attention are unknown. Here we show that the strength of each subject's right temporal-parietal junction (RTPJ, part of the ventral network) activation was highly correlated with their behavioral performance in an auditory task. We also provide evidence that the recruitment of the RTPJ likely precedes the right frontal eye fields (FEF; participating in both the dorsal and ventral networks) and middle frontal gyrus (MFG) by around 100 ms when subjects switch their auditory spatial attention. PMID:22974974

  13. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  14. Listener orientation and spatial judgments of elevated auditory percepts

    NASA Astrophysics Data System (ADS)

    Parks, Anthony J.

How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of the dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli under fixed and free-head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
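The dichotomous (detected / not detected) model described above rests on binary logistic regression. A minimal single-predictor fit can be sketched as follows; the data are synthetic stand-ins and the one-predictor form is a simplification of the factorial model in the study:

```python
import math
import random

random.seed(0)

# Synthetic stand-in data: head-rotation extent (degrees) vs. correct detection
# of an elevated source (1/0). The generating slope and intercept are invented.
data = []
for _ in range(500):
    rot = random.uniform(0.0, 60.0)
    p = 1.0 / (1.0 + math.exp(-(0.1 * rot - 2.0)))
    data.append((rot, 1.0 if random.random() < p else 0.0))

def fit_logistic(data, steps=25):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by Newton-Raphson on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += y - p          # gradient of the log-likelihood
            g1 += (y - p) * x
            h00 += w             # entries of the (negated) Hessian
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01   # invert the 2x2 Hessian by hand
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_logistic(data)
print(b0, b1)  # should land near the generating values (-2.0, 0.1)
```

The multinomial model for above/below/behind judgments generalizes the same fit to one coefficient vector per response category.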

  15. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    PubMed

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. In the species for which the neuroanatomy of the crista acustica has been documented, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than a linear sequence. This arrangement is shown to involve 30-50% of the overall auditory receptors. No strict correlation of somata positions between the anterio-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly straight organization into a more complex morphology. A linear organization of dendrites is therefore not a morphological criterion that allows hearing organs to be distinguished, in all species, from the nonhearing sense organs serially homologous to ears. The crowded arrangement of both receptor somata and dendrites may result from functional constraints relating to frequency discrimination or from developmental constraints on auditory morphogenesis in postembryonic development. PMID:22807283

  16. Auditory Cortical Plasticity in Learning to Discriminate Modulation Rate

    PubMed Central

    van Wassenhove, Virginie; Nagarajan, Srikantan S.

    2014-01-01

    The discrimination of temporal information in acoustic inputs is a crucial aspect of auditory perception, yet very few studies have focused on auditory perceptual learning of timing properties and associated plasticity in adult auditory cortex. Here, we trained participants on a temporal discrimination task. The main task used a base stimulus (four tones separated by intervals of 200 ms) that had to be distinguished from a target stimulus (four tones with intervals down to ~180 ms). We show that participants’ auditory temporal sensitivity improves with a short amount of training (3 d, 1 h/d). Learning to discriminate temporal modulation rates was accompanied by a systematic amplitude increase of the early auditory evoked responses to trained stimuli, as measured by magnetoencephalography. Additionally, learning and auditory cortex plasticity partially generalized to interval discrimination but not to frequency discrimination. Auditory cortex plasticity associated with short-term perceptual learning was manifested as an enhancement of auditory cortical responses to trained acoustic features only in the trained task. Plasticity was also manifested as induced non-phase–locked high gamma-band power increases in inferior frontal cortex during performance in the trained task. Functional plasticity in auditory cortex is here interpreted as the product of bottom-up and top-down modulations. PMID:17344404

  17. Auditory feedback in error-based learning of motor regularity.

    PubMed

    van Vugt, Floris T; Tillmann, Barbara

    2015-05-01

Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to the learning of timing and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here, to reduce variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications. PMID:25721795

  18. Natural auditory scene statistics shapes human spatial hearing

    PubMed Central

    Parise, Cesare V.; Knorre, Katharina; Ernst, Marc O.

    2014-01-01

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409

  19. Natural auditory scene statistics shapes human spatial hearing.

    PubMed

    Parise, Cesare V; Knorre, Katharina; Ernst, Marc O

    2014-04-22

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409

  20. The effect of real-time auditory feedback on learning new characters.

    PubMed

    Danna, Jérémy; Fontaine, Maureen; Paz-Villagrán, Vietminh; Gondre, Charles; Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc

    2015-10-01

The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed into two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training session and another 24 h later. Two characters were learned with, and two without, real-time auditory feedback (FB). The first group learned the two non-sonified characters first and then the two sonified characters, whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows the writer to perceive his/her movement in addition to the written trace, and this might facilitate handwriting learning. However, for the subjects who first learned the characters with auditory FB, there were no differential effects of auditory FB in either the short or the long term. We hypothesize that the positive effect on the handwriting kinematics was transferred to the characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding. PMID:25533208
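A sonification setup of this kind maps pen kinematics to sound parameters in real time. The sketch below shows one plausible velocity-to-pitch mapping; the function name, the mapping, and all constants are invented for illustration and are not the synthesis scheme used in the study:

```python
import math

def velocity_to_pitch(vx, vy, f_min=220.0, f_max=880.0, v_max=0.5):
    """Map instantaneous pen speed to an oscillator frequency in Hz. The
    mapping and all constants are illustrative, not those of the study."""
    norm = min(math.hypot(vx, vy) / v_max, 1.0)
    # Logarithmic interpolation: equal speed steps give equal musical intervals.
    return f_min * (f_max / f_min) ** norm

print(velocity_to_pitch(0.0, 0.0))   # 220.0: slowest movement, lowest pitch
print(velocity_to_pitch(0.25, 0.0))  # 440.0: half of v_max is one octave up
print(velocity_to_pitch(0.5, 0.0))   # 880.0: at or above v_max, highest pitch
```

A logarithmic rather than linear mapping is a common sonification choice because pitch perception is itself roughly logarithmic in frequency, so kinematic changes sound equally salient across the range.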

  1. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    ERIC Educational Resources Information Center

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  2. Does visual experience influence the spatial distribution of auditory attention?

    PubMed

    Lerens, Elodie; Renier, Laurent

    2014-02-01

Sighted individuals are less accurate and slower at localizing sounds coming from peripheral space than sounds coming from frontal space. This bias in favour of the frontal auditory space seems reduced in early blind individuals, who are better than sighted individuals at localizing sounds coming from peripheral space. Currently, it is not clear to what extent this bias is a general phenomenon or whether it applies only to spatial processing (i.e. sound localization). In our approach we compared the performance of early blind participants with that of sighted subjects during a frequency discrimination task with sounds originating either from frontal or peripheral locations. Results showed that early blind participants discriminated both peripheral and frontal sounds faster than sighted subjects did. In addition, sighted subjects were faster at discriminating frontal sounds than peripheral ones, whereas early blind participants showed equal discrimination speed for frontal and peripheral sounds. We conclude that the spatial bias observed in sighted subjects reflects an imbalance in the spatial distribution of auditory attention resources that is induced by visual experience. PMID:24378238

  3. Transformation of spatial sensitivity along the ascending auditory pathway

    PubMed Central

    Yao, Justin D.; Bremen, Peter

    2015-01-01

    Locations of sounds are computed in the central auditory pathway based primarily on differences in sound level and timing at the two ears. In rats, the results of that computation appear in the primary auditory cortex (A1) as exclusively contralateral hemifield spatial sensitivity, with strong responses to sounds contralateral to the recording site, sharp cutoffs across the midline, and weak, sound-level-tolerant responses to ipsilateral sounds. We surveyed the auditory pathway in anesthetized rats to identify the brain level(s) at which level-tolerant spatial sensitivity arises. Noise-burst stimuli were varied in horizontal sound location and in sound level. Neurons in the central nucleus of the inferior colliculus (ICc) displayed contralateral tuning at low sound levels, but tuning was degraded at successively higher sound levels. In contrast, neurons in the nucleus of the brachium of the inferior colliculus (BIN) showed sharp, level-tolerant spatial sensitivity. The ventral division of the medial geniculate body (MGBv) contained two discrete neural populations, one showing broad sensitivity like the ICc and one showing sharp sensitivity like A1. Dorsal, medial, and shell regions of the MGB showed fairly sharp spatial sensitivity, likely reflecting inputs from A1 and/or the BIN. The results demonstrate two parallel brainstem pathways for spatial hearing. The tectal pathway, in which sharp, level-tolerant spatial sensitivity arises between ICc and BIN, projects to the superior colliculus and could support reflexive orientation to sounds. The lemniscal pathway, in which such sensitivity arises between ICc and the MGBv, projects to the forebrain to support perception of sound location. PMID:25744891

  4. Auditory spatial perception dynamically realigns with changing eye position.

    PubMed

    Razavi, Babak; O'Neill, William E; Paige, Gary D

    2007-09-19

Audition and vision both form spatial maps of the environment in the brain, and their congruency requires alignment and calibration. Because audition is referenced to the head and vision is referenced to movable eyes, the brain must accurately account for eye position to maintain alignment between the two modalities as well as perceptual space constancy. Changes in eye position are known to shift sound localization variably but inconsistently, suggesting subtle shortcomings in the accuracy or use of eye position signals. We systematically and directly quantified sound localization across a broad spatial range and over time after changes in eye position. A sustained fixation task addressed the spatial (steady-state) attributes of eye position-dependent effects on sound localization. Subjects continuously fixated visual reference spots straight ahead (center), to the left (20 degrees), or to the right (20 degrees) of the midline in separate sessions while localizing auditory targets using a laser pointer guided by peripheral vision. An alternating fixation task focused on the temporal (dynamic) aspects of auditory spatial shifts after changes in eye position. Localization proceeded as in sustained fixation, except that eye position alternated between the three fixation references over multiple epochs, each lasting minutes. Auditory space shifted by approximately 40% toward the new eye position, and did so dynamically over several minutes. We propose that this spatial shift reflects an adaptation mechanism for aligning the "straight-ahead" of perceived sensory-motor maps, particularly during early childhood when normal ocular alignment is achieved, but also for resolving challenges to normal spatial perception throughout life. PMID:17881531
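The reported dynamics (a shift of roughly 40% of the eye-position change that develops over minutes) suggest a simple exponential-approach description. The sketch below is one such description, not the authors' model: the ~40% gain follows the abstract, while the time constant and offset are hypothetical placeholders.

```python
import math

def auditory_shift(t_minutes, gaze_offset_deg=20.0, gain=0.4, tau=2.0):
    """Shift of perceived auditory space after an eye-position change, modeled
    as an exponential approach to a steady state equal to `gain` (the ~40%
    reported above) times the gaze offset; `tau` is a hypothetical constant."""
    return gain * gaze_offset_deg * (1.0 - math.exp(-t_minutes / tau))

for t in (0, 1, 5, 20):
    print(t, round(auditory_shift(t), 2))  # shift grows toward 8 degrees
```

Under this description the shift is negligible immediately after the eye movement and asymptotes at 8 degrees for a 20-degree gaze offset, matching the qualitative time course described in the abstract.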

  5. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    PubMed Central

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2015-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback, however it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067

  6. Quadri-stability of a spatially ambiguous auditory illusion

    PubMed Central

    Bainbridge, Constance M.; Bainbridge, Wilma A.; Oliva, Aude

    2014-01-01

In addition to vision, audition plays an important role in localizing objects in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front of versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or "bouncing" to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound's motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual awareness. PMID

  7. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
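
    The adaptive staircase procedure mentioned above can be sketched generically. The following is a minimal 1-up/1-down simulation against a synthetic listener, not the NASA-Ames protocol; the starting level, step size, and reversal count are all assumptions.

```python
import random

def run_staircase(true_threshold_db, start_db=10.0, step_db=2.0, n_reversals=8):
    """Generic 1-up/1-down adaptive staircase sketch: the signal-to-babble
    level is lowered after each correct identification and raised after
    each error, so the track converges on the 50%-correct level."""
    random.seed(0)                      # deterministic simulated listener
    level = start_db
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated listener: correct whenever the presented level exceeds
        # the true threshold, plus a little response noise.
        correct = level + random.gauss(0.0, 1.0) > true_threshold_db
        direction = -1 if correct else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(level)
        last_direction = direction
        level += direction * step_db
    # Threshold estimate: mean level at the reversal points.
    return sum(reversals) / len(reversals)

estimate = run_staircase(true_threshold_db=0.0)
```

    With this rule the track oscillates around the listener's threshold, and averaging the reversal levels gives the intelligibility estimate.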

  8. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  9. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    PubMed

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function. PMID:23988583

  10. A lateralized auditory evoked potential elicited when auditory objects are defined by spatial motion.

    PubMed

    Butcher, Andrew; Govenlock, Stanley W; Tata, Matthew S

    2011-02-01

    Scene analysis involves the process of segmenting a field of overlapping objects from each other and from the background. It is a fundamental stage of perception in both vision and hearing. The auditory system encodes complex cues that allow listeners to find boundaries between sequential objects, even when no gap of silence exists between them. In this sense, object perception in hearing is similar to perceiving visual objects defined by isoluminant color, motion or binocular disparity. Motion is one such cue: when a moving sound abruptly disappears from one location and instantly reappears somewhere else, the listener perceives two sequential auditory objects. Smooth reversals of motion direction do not produce this segmentation. We investigated the brain electrical responses evoked by this spatial segmentation cue and compared them to the familiar auditory evoked potential elicited by sound onsets. Segmentation events evoke a pattern of negative and positive deflections that are unlike those evoked by onsets. We identified a negative component in the waveform - the Lateralized Object-Related Negativity - generated by the hemisphere contralateral to the side on which the new sound appears. The relationship between this component and similar components found in related paradigms is considered. PMID:21056097

  11. Level dependence of spatial processing in the primate auditory cortex

    PubMed Central

    Wang, Xiaoqin

    2012-01-01

    Sound localization in both humans and monkeys is tolerant to changes in sound levels. The underlying neural mechanism, however, is not well understood. This study reports the level dependence of individual neurons' spatial receptive fields (SRFs) in the primary auditory cortex (A1) and the adjacent caudal field in awake marmoset monkeys. We found that most neurons' excitatory SRF components were spatially confined in response to broadband noise stimuli delivered from the upper frontal sound field. Approximately half the recorded neurons exhibited little change in spatial tuning width over a ∼20-dB change in sound level, whereas the remaining neurons showed either expansion or contraction in their tuning widths. Increased sound levels did not alter the percent distribution of tuning width for neurons collected in either cortical field. The population-averaged responses remained tuned between 30- and 80-dB sound pressure levels for neuronal groups preferring contralateral, midline, and ipsilateral locations. We further investigated the spatial extent and level dependence of the suppressive component of SRFs using a pair of sequentially presented stimuli. Forward suppression was observed when the stimuli were delivered from “far” locations, distant to the excitatory center of an SRF. In contrast to spatially confined excitation, the strength of suppression typically increased with stimulus level at both the excitatory center and far regions of an SRF. These findings indicate that although the spatial tuning of individual neurons varied with stimulus levels, their ensemble responses were level tolerant. Widespread spatial suppression may play an important role in limiting the sizes of SRFs at high sound levels in the auditory cortex. PMID:22592309

  12. Auditory perceptual learning and the cochlear implant.

    PubMed

    Watson, C S

    1991-01-01

    Recent studies of the perception of complex nonspeech sounds have shown that individual parts of spectral-temporal waveforms become more salient through experience or selective training. One consequence of this is that the same sound can be perceived quite differently, depending on prior experience and also on the listener's expectations. The time course of learning to identify initially unfamiliar speech sounds by normal-hearing listeners has been shown to extend over many months, possibly even years, before reaching a stable high level of performance. If implant users or users of other types of speech aids (acoustic hearing aids, vibrotactile aids) are required to learn to interpret sounds as unfamiliar as the nonspeech sounds used in research are to normal listeners, similar lengthy experience should be required for them to achieve maximum performance. Why this has not been the case in some clinical studies is a puzzle. A possible explanation is that speech perception strongly depends on abilities other than sensory acuity or resolving power. Studies of individual differences in auditory temporal and spectral acuity have not shown strong correlations between individual differences in those abilities and individual differences in speech perception. It is argued that one way to interpret this observation is that differences in the ability to hear speech (as by two hearing-impaired listeners with the same audiogram) may be the result of central differences in the ability to infer an original stimulus from its fragments; this ability may be independent of the sensory modality in which the information is presented. PMID:2069193

  13. Auditory Spatial Coding Flexibly Recruits Anterior, but Not Posterior, Visuotopic Parietal Cortex.

    PubMed

    Michalka, Samantha W; Rosen, Maya L; Kong, Lingqiang; Shinn-Cunningham, Barbara G; Somers, David C

    2016-03-01

    Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996

  14. Auditory Spatial Coding Flexibly Recruits Anterior, but Not Posterior, Visuotopic Parietal Cortex

    PubMed Central

    Michalka, Samantha W.; Rosen, Maya L.; Kong, Lingqiang; Shinn-Cunningham, Barbara G.; Somers, David C.

    2016-01-01

    Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2–4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996

  15. The Auditory Verbal Learning Test (Rey AVLT): An Arabic Version

    ERIC Educational Resources Information Center

    Sharoni, Varda; Natur, Nazeh

    2014-01-01

    The goals of this study were to adapt the Rey Auditory Verbal Learning Test (AVLT) into Arabic, to compare recall functioning among age groups (6:0 to 17:11), and to compare gender differences on various memory dimensions (immediate and delayed recall, learning rate, recognition, proactive interferences, and retroactive interferences). This…

  16. Influence of Syllable Structure on L2 Auditory Word Learning

    ERIC Educational Resources Information Center

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  17. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000

  18. Perceptual learning and auditory training in cochlear implant recipients.

    PubMed

    Fu, Qian-Jie; Galvin, John J

    2007-09-01

    Learning electrically stimulated speech patterns can be a new and difficult experience for cochlear implant (CI) recipients. Recent studies have shown that most implant recipients at least partially adapt to these new patterns via passive, daily-listening experiences. Gradually introducing a speech processor parameter (eg, the degree of spectral mismatch) may provide for more complete and less stressful adaptation. Although the implant device restores hearing sensation and the continued use of the implant provides some degree of adaptation, active auditory rehabilitation may be necessary to maximize the benefit of implantation for CI recipients. Currently, there are scant resources for auditory rehabilitation for adult, postlingually deafened CI recipients. We recently developed a computer-assisted speech-training program to provide the means to conduct auditory rehabilitation at home. The training software targets important acoustic contrasts among speech stimuli, provides auditory and visual feedback, and incorporates progressive training techniques, thereby maintaining recipients' interest during the auditory training exercises. Our recent studies demonstrate the effectiveness of targeted auditory training in improving CI recipients' speech and music perception. Provided with an inexpensive and effective auditory training program, CI recipients may find the motivation and momentum to get the most from the implant device. PMID:17709574

  19. Late Maturation of Auditory Perceptual Learning

    ERIC Educational Resources Information Center

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  20. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  1. Effects of category learning on auditory perception and cortical maps

    NASA Astrophysics Data System (ADS)

    Guenther, Frank H.

    2002-05-01

    Our ability to discriminate sounds is not uniform throughout acoustic space. One example of auditory space warping, termed the perceptual magnet effect by Kuhl and colleagues, appears to arise from exposure to the phonemes of an infant's native language. We have developed a neural model that accounts for the magnet effect in terms of neural map dynamics in auditory cortex. This model predicts that it should be possible to induce a magnet effect for non-speech stimuli. This prediction was verified by a psychophysical experiment in which subjects underwent categorization training involving non-speech auditory stimuli that were not categorical prior to training. The model further predicts that the magnet effect arises because prototypical vowels have a smaller cortical representation than non-prototypical vowels. This prediction was supported by an fMRI experiment involving prototypical and non-prototypical examples of the vowel /i/. Finally, the model predicts that categorization training with non-speech stimuli should lead to a decreased cortical representation for stimuli near the center of the category. This prediction was supported by an fMRI experiment involving categorization training with non-speech auditory stimuli. These results provide strong support for the model's account of the effects of category learning on auditory perception and auditory cortical maps.

  2. Lack of Generalization of Auditory Learning in Typically Developing Children

    ERIC Educational Resources Information Center

    Halliday, Lorna F.; Taylor, Jenny L.; Millward, Kerri E.; Moore, David R.

    2012-01-01

    Purpose: To understand the components of auditory learning in typically developing children by assessing generalization across stimuli, across modalities (i.e., hearing, vision), and to higher level language tasks. Method: Eighty-six 8- to 10-year-old typically developing children were quasi-randomly assigned to 4 groups. Three of the groups…

  3. Switching auditory attention using spatial and non-spatial features recruits different cortical networks

    PubMed Central

    Larson, Eric; Lee, Adrian KC

    2013-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electroencephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. PMID:24096028

  4. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time

    PubMed Central

    McLachlan, Neil M.; Greco, Loretta J.; Toner, Emily C.; Wilson, Sarah J.

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  5. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time.

    PubMed

    McLachlan, Neil M; Greco, Loretta J; Toner, Emily C; Wilson, Sarah J

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  6. The Time Course of Neural Changes Underlying Auditory Perceptual Learning

    PubMed Central

    Atienza, Mercedes; Cantero, Jose L.; Dominguez-Marin, Elena

    2002-01-01

    Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)—which indexes automatic detection of a change in a homogeneous auditory sequence—just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception. PMID:12075002

  7. Rapid cortical dynamics associated with auditory spatial attention gradients

    PubMed Central

    Mock, Jeffrey R.; Seay, Michael J.; Charney, Danielle R.; Holmes, John L.; Golob, Edward J.

    2015-01-01

    Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120–200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models. PMID:26082679
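
    The gradient quantification described above (regressing distractor ERP amplitudes against spatial location relative to the target) amounts to a first-order fit. A toy sketch with made-up amplitude values, assuming four non-target locations coded by angular distance from the attended location:

```python
import numpy as np

# Hypothetical distractor ERP amplitudes (microvolts) at four non-target
# locations, coded by angular distance from the attended location (deg).
# Values are invented for illustration only.
angles = np.array([45.0, 90.0, 135.0, 180.0])
amps = np.array([4.1, 3.2, 2.5, 1.6])

# The attentional gradient is the slope of a first-order fit of
# amplitude against distance from the target.
slope, intercept = np.polyfit(angles, amps, 1)
print(round(slope, 4))  # → -0.0182 (amplitude falls off with distance)
```

    A negative slope at a frontal positivity (or a positive slope at a posterior negativity) then indexes how steeply processing benefits drop away from the attended location.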

  8. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    PubMed

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. Neurofeedback (NFB) using an auditory stimulus as a reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer. PMID:26294269
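
    The theta/alpha ratio that served as the reinforcement criterion can be computed from band powers. A minimal offline sketch on synthetic EEG; a real NFB loop estimates this online, and the sample rate and signal amplitudes here are illustrative assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Summed periodogram power in the [lo, hi) Hz band
    (offline FFT sketch, not an online NFB estimator)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

fs = 256                                  # sampling rate (Hz); illustrative
t = np.arange(fs * 4) / fs                # 4 s of synthetic "EEG"
# Strong 6 Hz theta plus weaker 10 Hz alpha, mimicking the
# excess-theta profile described for LD children.
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)

theta_alpha = band_power(eeg, fs, 4, 8) / band_power(eeg, fs, 8, 13)
# Reinforcement (tone or white square) would be delivered
# whenever this ratio decreases below its running criterion.
```

    With these amplitudes the theta band carries four times the alpha power, so the ratio starts well above 1, the situation the training aims to reverse.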

  9. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    PubMed Central

    2012-01-01

    Background: This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could in turn, improve performance and motor learning. Methods: We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results: Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel visuomotor perturbation, whereas

  10. Selective Increase of Auditory Cortico-Striatal Coherence during Auditory-Cued Go/NoGo Discrimination Learning

    PubMed Central

    Schulz, Andreas L.; Woldeit, Marie L.; Gonçalves, Ana I.; Saldeitis, Katja; Ohl, Frank W.

    2016-01-01

    Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they remain poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed. PMID:26793085
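Field-field coherence, the coupling measure used in this study, quantifies how consistently two signals share power and phase at each frequency. The toy computation below uses a bare-bones Welch-style estimator and simulated signals; it is a sketch of the concept, not the authors' analysis pipeline:

```python
import numpy as np

def field_coherence(x, y, fs, nperseg=1024):
    """Magnitude-squared coherence from segment-averaged spectra
    (a minimal Welch-style estimate: no overlap, no tapering)."""
    nseg = len(x) // nperseg
    Sxx = Syy = 0.0
    Sxy = 0.0 + 0.0j
    for k in range(nseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        X = np.fft.rfft(x[seg])
        Y = np.fft.rfft(y[seg])
        Sxx = Sxx + np.abs(X) ** 2          # auto-spectrum of x
        Syy = Syy + np.abs(Y) ** 2          # auto-spectrum of y
        Sxy = Sxy + X * np.conj(Y)          # cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

# Two simulated "field potentials" sharing a common 10 Hz rhythm plus
# independent noise, standing in for paired cortical/striatal recordings.
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1.0 / fs)
shared = np.sin(2 * np.pi * 10 * t)
cortex = shared + 0.5 * rng.standard_normal(t.size)
striatum = shared + 0.5 * rng.standard_normal(t.size)

f, Cxy = field_coherence(cortex, striatum, fs)
peak_freq = f[np.argmax(Cxy)]   # expected to sit near the shared 10 Hz rhythm
```

Averaging over segments is essential: with a single segment the estimate is identically 1 at every frequency, while averaging makes coherence high only where the two signals are reliably coupled.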

  11. Feedback delays eliminate auditory-motor learning in speech production.

    PubMed

    Max, Ludo; Maffett, Derek G

    2015-03-30

    Neurologically healthy individuals use sensory feedback to alter future movements by updating internal models of the effector system and environment. For example, when visual feedback about limb movements or auditory feedback about speech movements is experimentally perturbed, the planning of subsequent movements is adjusted, i.e., sensorimotor adaptation occurs. A separate line of studies has demonstrated that experimentally delaying the sensory consequences of limb movements causes the sensory input to be attributed to external sources rather than to one's own actions. Yet similar feedback delays have remarkably little effect on visuo-motor adaptation (although the rate of learning varies, the amount of adaptation is only moderately affected with delays of 100-200 ms, and adaptation still occurs even with a delay as long as 5000 ms). Thus, limb motor learning remains largely intact even in conditions where error assignment favors external factors. Here, we show a fundamentally different result for sensorimotor control of speech articulation: auditory-motor adaptation to formant-shifted feedback is completely eliminated with delays of 100 ms or more. Thus, for speech motor learning, real-time auditory feedback is critical. This novel finding informs theoretical models of human motor control in general and speech motor control in particular, and it has direct implications for the application of motor learning principles in the habilitation and rehabilitation of individuals with various sensorimotor speech disorders. PMID:25676810

  12. Analysis of Mean Learning of Normal Participants on the Rey Auditory-Verbal Learning Test

    ERIC Educational Resources Information Center

    Poreh, Amir

    2005-01-01

    Analysis of the mean performance of 58 groups of normal adults and children on the free-recall trials of the Rey Auditory-Verbal Learning Test shows that the mean auditory-verbal learning of each group is described by the function R1 + S ln(t), where R1 is a measure of the mean immediate memory span, S is the slope of the mean logarithmic learning…
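Because ln(1) = 0, the model R(t) = R1 + S ln(t) is linear in the regressor ln(t), so R1 and S can be fitted to group means by ordinary least squares. The recall values below are made-up illustrative numbers, not data from the study:

```python
import numpy as np

# Mean recall on trial t is modeled as R(t) = R1 + S * ln(t), where R1 is
# the mean immediate memory span and S the logarithmic learning slope.
trials = np.arange(1, 6)                        # the five free-recall trials
observed = np.array([6.1, 7.6, 8.9, 9.4, 10.0])  # illustrative group means

# Fitting R1 and S is ordinary least squares on [1, ln(t)].
A = np.column_stack([np.ones(trials.size), np.log(trials)])
R1_hat, S_hat = np.linalg.lstsq(A, observed, rcond=None)[0]
predicted = R1_hat + S_hat * np.log(trials)
```

On trial 1 the model prediction equals R1 exactly, which matches the paper's interpretation of R1 as the immediate memory span.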

  13. Motor-related signals in the auditory system for listening and learning.

    PubMed

    Schneider, David M; Mooney, Richard

    2015-08-01

    In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning. PMID:25827273

  14. Auditory middle latency response in children with learning difficulties

    PubMed Central

    Frizzo, Ana Claudia Figueiredo; Issac, Myriam Lima; Pontes-Fernandes, Angela Cristina; Menezes, Pedro de Lemos; Funayama, Carolina Araújo Rodrigues

    2012-01-01

    Summary Introduction: This is an objective laboratory assessment of the central auditory systems of children with learning disabilities. Aim: To examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. Methods: This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8–13 years of both genders with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. Results and Conclusions: The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8–32.3 ms, Pa = 19.0–51.4 ms, Nb = 30.0–64.3 ms (learning disorders group) and Na = 13.2–29.6 ms, Pa = 21.8–42.8 ms, Nb = 28.4–65.8 ms (healthy group). The Na-Pa amplitude ranged from 0.3 to 6.8 μV (learning disorders group) and from 0.2 to 3.6 μV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study has provided information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders. PMID:25991954

  15. Rapid auditory learning of temporal gap detection.

    PubMed

    Mishra, Srikanta K; Panda, Manasa R

    2016-07-01

    The rapid initial phase of training-induced improvement has been shown to reflect a genuine sensory change in perception. Several features of early and rapid learning, such as generalization and stability, remain to be characterized. The present study demonstrated that learning effects from brief training on a temporal gap detection task, using spectrally similar narrowband noise markers to define the gap (within-channel task), transfer across ears but not across spectrally dissimilar markers (between-channel task). The learning effects associated with brief training on a gap detection task were found to be stable for at least a day. These initial findings have significant implications for characterizing early and rapid learning effects. PMID:27475211
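Gap-detection thresholds in studies like this are typically estimated with an adaptive procedure. The abstract does not specify which one, so the 2-down/1-up staircase below, run against a simulated listener, is a generic hypothetical sketch rather than the authors' method:

```python
import random

def two_down_one_up(true_threshold_ms, start_ms=20.0, step_ms=1.0,
                    n_trials=400, seed=1):
    """Simulate a 2-down/1-up adaptive staircase for gap detection.
    The simulated listener reliably detects gaps above its threshold and
    guesses below it; the track converges near threshold (in a real
    2AFC task, near the ~70.7%-correct point)."""
    rng = random.Random(seed)
    gap = start_ms
    correct_streak = 0
    reversals_at = []
    last_direction = 0
    for _ in range(n_trials):
        # Crude simulated observer: near-certain detection above
        # threshold, chance (50%) performance below it.
        p = 0.99 if gap >= true_threshold_ms else 0.5
        correct = rng.random() < p
        if correct:
            correct_streak += 1
            if correct_streak == 2:            # two correct -> make harder
                correct_streak = 0
                if last_direction == +1:
                    reversals_at.append(gap)   # direction changed: reversal
                gap = max(gap - step_ms, 0.5)
                last_direction = -1
        else:
            correct_streak = 0                 # one wrong -> make easier
            if last_direction == -1:
                reversals_at.append(gap)
            gap += step_ms
            last_direction = +1
    # Threshold estimate: mean gap at the last few reversals.
    tail = reversals_at[-8:]
    return sum(tail) / len(tail)
```

Running this with a simulated 5 ms threshold yields an estimate that hovers around the true value, which is why reversal-averaged staircases are a standard way to measure gap thresholds quickly.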

  16. Perception of auditory, visual, and egocentric spatial alignment adapts differently to changes in eye position.

    PubMed

    Cui, Qi N; Razavi, Babak; O'Neill, William E; Paige, Gary D

    2010-02-01

    Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (high-pass and low-pass noise) related to specific auditory spatial channels, 3) matched by a shift in the perceived straight-ahead (PSA), and 4) accompanied by a spatial shift for visual and/or bimodal (visual and auditory) targets. Subjects were tested in a dark echo-attenuated chamber with their heads fixed facing a cylindrical screen, behind which a mobile speaker/LED presented targets across the frontal field. Subjects fixated alternating reference spots (0°, ±20°) horizontally or vertically while either localizing targets or indicating PSA using a laser pointer. Results showed that the spatial shift induced by ocular eccentricity is 1) preserved for auditory targets without a visual fixation reference, 2) generalized for all frequency bands, and thus all auditory spatial channels, 3) paralleled by a shift in PSA, and 4) restricted to auditory space. Findings are consistent with a set-point control strategy by which eye position governs multimodal spatial alignment. The phenomenon is robust for auditory space and egocentric perception, and highlights the importance of controlling for eye position in the examination of spatial perception and behavior. PMID:19846626

  17. Disruption of auditory spatial working memory by inactivation of the forebrain archistriatum in barn owls.

    PubMed

    Knudsen, E I; Knudsen, P F

    1996-10-01

    Barn owls not only localize auditory stimuli with great accuracy, they also remember the locations of auditory stimuli and can use this remembered spatial information to guide their flight and strike. Although the mechanisms of sound localization have been studied extensively, the neurobiological basis of auditory spatial memory has not. Here we show that the ability of barn owls to orient their gaze towards and fly to the remembered location of auditory targets is lost during pharmacological inactivation of a small region in the forebrain, the anterior archistriatum. In contrast, archistriatal inactivation has no effect on stimulus-guided responses to auditory targets. The memory-dependent deficit is evident only for acoustic events that occur in the hemifield contralateral to the side that is inactivated. The data demonstrate that in the avian archistriatum, as in the mammalian frontal cortex, there exists a region that is essential for the expression of spatial working memory and that, in the barn owl, this region encodes auditory spatial memory. PMID:8837773

  18. Spatial representations of temporal and spectral sound cues in human auditory cortex.

    PubMed

    Herdener, Marcus; Esposito, Fabrizio; Scheffler, Klaus; Schneider, Peter; Logothetis, Nikos K; Uludag, Kamil; Kayser, Christoph

    2013-01-01

    Natural and behaviorally relevant sounds are characterized by temporal modulations of their waveforms, which carry important cues for sound segmentation and communication. Still, there is little consensus as to how this temporal information is represented in auditory cortex. Here, by using functional magnetic resonance imaging (fMRI) optimized for studying the auditory system, we report the existence of a topographically ordered spatial representation of temporal sound modulation rates in human auditory cortex. We found a topographically organized sensitivity within auditory cortex to sounds with varying modulation rates, with enhanced responses to lower modulation rates (2 and 4 Hz) on lateral parts of Heschl's gyrus (HG) and faster modulation rates (16 and 32 Hz) on medial HG. The representation of temporal modulation rates was distinct from the representation of sound frequencies (tonotopy), which was oriented roughly orthogonally. Moreover, the combination of probabilistic anatomical maps with a previously proposed functional delineation of auditory fields revealed that the distinct maps of temporal and spectral sound features both prevail within two presumed primary auditory fields hA1 and hR. Our results reveal a topographically ordered representation of temporal sound cues in human primary auditory cortex that is complementary to maps of spectral cues. They thereby enhance our understanding of the functional parcellation and organization of auditory cortical processing. PMID:23706955

  19. Spatial Language Learning

    ERIC Educational Resources Information Center

    Fu, Zhengling

    2016-01-01

    Spatial language constitutes part of the basic fabric of language. Although languages may have the same number of terms to cover a set of spatial relations, they do not always do so in the same way. Spatial languages differ across languages quite radically, thus providing a real semantic challenge for second language learners. The essay first…

  20. Interactive coding of visual spatial frequency and auditory amplitude-modulation rate.

    PubMed

    Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Mossbridge, Julia; Suzuki, Satoru

    2012-03-01

    Spatial frequency is a fundamental visual feature coded in primary visual cortex, relevant for perceiving textures, objects, hierarchical structures, and scenes, as well as for directing attention and eye movements. Temporal amplitude-modulation (AM) rate is a fundamental auditory feature coded in primary auditory cortex, relevant for perceiving auditory objects, scenes, and speech. Spatial frequency and temporal AM rate are thus fundamental building blocks of visual and auditory perception. Recent results suggest that crossmodal interactions are commonplace across the primary sensory cortices and that some of the underlying neural associations develop through consistent multisensory experience such as audio-visually perceiving speech, gender, and objects. We demonstrate that people consistently and absolutely (rather than relatively) match specific auditory AM rates to specific visual spatial frequencies. We further demonstrate that this crossmodal mapping allows amplitude-modulated sounds to guide attention to and modulate awareness of specific visual spatial frequencies. Additional results show that the crossmodal association is approximately linear, based on physical spatial frequency, and generalizes to tactile pulses, suggesting that the association develops through multisensory experience during manual exploration of surfaces. PMID:22326023

  1. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    ERIC Educational Resources Information Center

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  2. Learning Anatomy Enhances Spatial Ability

    ERIC Educational Resources Information Center

    Vorstenbosch, Marc A. T. M.; Klaassen, Tim P. F. M.; Donders, A. R. T.; Kooloos, Jan G. M.; Bolhuis, Sanneke M.; Laan, Roland F. J. M.

    2013-01-01

    Spatial ability is an important factor in learning anatomy. Students with high scores on a mental rotation test (MRT) systematically score higher on anatomy examinations. This study aims to investigate whether learning anatomy, conversely, also improves the MRT score. Five hundred first year students of medicine ("n" = 242, intervention) and…

  3. Auditory Discrimination as a Condition for E-Learning Based Speech Therapy: A Proposal for an Auditory Discrimination Test (ADT) for Adult Dysarthric Speakers

    ERIC Educational Resources Information Center

    Beijer, L. J.; Rietveld, A. C. M.; van Stiphout, A. J. L.

    2011-01-01

    Background: Web based speech training for dysarthric speakers, such as E-learning based Speech Therapy (EST), puts considerable demands on auditory discrimination abilities. Aims: To discuss the development and the evaluation of an auditory discrimination test (ADT) for the assessment of auditory speech discrimination skills in Dutch adult…

  4. Learning Auditory Space: Generalization and Long-Term Effects

    PubMed Central

    Mendonça, Catarina; Campos, Guilherme; Dias, Paulo; Santos, Jorge A.

    2013-01-01

    Background Previous findings have shown that humans can learn to localize with altered auditory space cues. Here we analyze such learning processes and their effects up to one month on both localization accuracy and sound externalization. Subjects were trained and retested, focusing on the effects of stimulus type in learning, stimulus type in localization, stimulus position, previous experience, externalization levels, and time. Method We trained listeners in azimuth and elevation discrimination in two experiments. Half participated in the azimuth experiment first and half in the elevation experiment first. In each experiment, half were trained with speech sounds and half with white noise. Retests were performed at several time intervals: just after training and one hour, one day, one week, and one month later. In a control condition, we tested the effect of systematic retesting over time with post-tests only after training and either one day, one week, or one month later. Results With training, all participants lowered their localization errors. This benefit was still present one month after training. Participants were more accurate in the second training phase, revealing an effect of previous experience on a different task. Training with white noise led to better results than training with speech sounds. Moreover, the training benefit generalized to untrained stimulus-position pairs. Throughout the post-tests, externalization levels increased. In the control condition, the long-term localization improvement was no smaller without additional contact with the trained sounds, but externalization levels were lower. Conclusion Our findings suggest that humans adapt easily to altered auditory space cues and that such adaptation spreads to untrained positions and sound types. We propose that such learning depends on all available cues, but each cue type might be learned and retrieved differently. The process of localization learning is global, not limited to stimulus-position pairs, and…

  5. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    PubMed

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test this, we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task, but not in the MAA task, after observation of the normal room. No improvement was found when performing the same task in the anechoic chamber. In addition, no difference was found between a condition of short environmental observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system. PMID:26082692

  6. Spatial pattern of intra-laminar connectivity in supragranular mouse auditory cortex

    PubMed Central

    Watkins, Paul V.; Kao, Joseph P. Y.; Kanold, Patrick O.

    2014-01-01

    Neuronal responses and topographic organization of feature selectivity in the cerebral cortex are shaped by ascending inputs and by intracortical connectivity. The mammalian primary auditory cortex has a tonotopic arrangement at large spatial scales (greater than 300 microns). This large-scale architecture breaks down in supragranular layers at smaller scales (around 300 microns), where nearby frequency and sound level tuning properties can be quite heterogeneous. Since layer 4 has a more homogeneous architecture, the heterogeneity in supragranular layers might be caused by heterogeneous ascending input or by heterogeneous intralaminar connections. Here we measure the functional 2-dimensional spatial connectivity pattern of the supragranular auditory cortex on micro-column scales. In general, connection probability decreases with radial distance from each neuron, but the decrease is steeper along the isofrequency axis, leading to an anisotropic distribution of connection probability with respect to the tonotopic axis. In addition to this radial decrease in connection probability, we find a patchy organization of inhibitory and excitatory synaptic inputs that is also anisotropic with respect to the tonotopic axis. These periodicities are at spatial scales of ~100 and ~300 μm. While these spatial periodicities show anisotropy in auditory cortex, they are isotropic in visual cortex, indicating region-specific differences in intralaminar connections. Together, our results show that layer 2/3 neurons in auditory cortex show specific spatial intralaminar connectivity despite the overtly heterogeneous tuning properties. PMID:24653677

  7. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not previously exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, various concerns must be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work examined the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present…

  8. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    PubMed

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning. PMID:26448497

  9. Neuromagnetic fields reveal cortical plasticity when learning an auditory discrimination task.

    PubMed

    Cansino, S; Williamson, S J

    1997-08-01

    Auditory evoked neuromagnetic fields of the primary and association auditory cortices were recorded while subjects learned to discriminate small differences in frequency and intensity between two consecutive tones. When discrimination was no better than chance, evoked field patterns across the scalp manifested no significant differences between correct and incorrect responses. However, when performance was correct on at least 75% of the trials, the spatial pattern of magnetic field differed significantly between correct and incorrect responses during the first 70 ms following the onset of the second tone. In this respect, the magnetic field pattern predicted when the subject would make an incorrect judgment more than 100 ms prior to indicating the judgment by a button press. One subject improved discrimination for much smaller differences between stimuli after 200 h of training. Evidence of cortical plasticity with improved discrimination is provided by an accompanying decrease of the relative magnetic field amplitude of the 100 ms response components in the primary and association auditory cortices. PMID:9295193

  10. Relative contributions of visual and auditory spatial representations to tactile localization.

    PubMed

    Noel, Jean-Paul; Wallace, Mark

    2016-02-01

    Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed and crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities, with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch. PMID:26768124

  11. Auditory experience-dependent cortical circuit shaping for memory formation in bird song learning

    PubMed Central

    Yanagihara, Shin; Yazaki-Sugiyama, Yoko

    2016-01-01

    As in human speech acquisition, songbird vocal learning depends on early auditory experience. During development, juvenile songbirds listen to and form auditory memories of adult tutor songs, which they use to shape their own vocalizations in later sensorimotor learning. The higher-level auditory cortex, called the caudomedial nidopallium (NCM), is a potential storage site for tutor song memory, but no direct electrophysiological evidence of tutor song memory has been found. Here, we identify the neuronal substrate for tutor song memory by recording single-neuron activity in the NCM of behaving juvenile zebra finches. After tutor song experience, a small subset of NCM neurons exhibit highly selective auditory responses to the tutor song. Moreover, both blockade of GABAergic inhibition and sleep decrease their selectivity. Taken together, these results suggest that experience-dependent recruitment of GABA-mediated inhibition shapes auditory cortical circuits, leading to sparse representation of tutor song memory in auditory cortical neurons. PMID:27327620

  13. The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Sodnik, Jaka; Jakus, Grega; Tomazic, Saso

    2012-01-01

    Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…

  14. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography, ECoG) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., a continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these…
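The Granger-causality logic used in this study asks whether the past of one signal improves prediction of another beyond that signal's own past. A minimal single-lag, least-squares version on simulated series can make the idea concrete; the generated data and model order here are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

def granger_f(x, y, lag=1):
    """One-way Granger test statistic: does past x help predict y?
    Compares a restricted AR model of y against one augmented with
    lagged x (minimal single-lag least-squares version)."""
    y_t = y[lag:]
    ones = np.ones_like(y_t)
    restricted = np.column_stack([ones, y[:-lag]])          # y's own past
    full = np.column_stack([ones, y[:-lag], x[:-lag]])      # plus x's past
    rss_r = np.sum((y_t - restricted @ np.linalg.lstsq(restricted, y_t, rcond=None)[0]) ** 2)
    rss_f = np.sum((y_t - full @ np.linalg.lstsq(full, y_t, rcond=None)[0]) ** 2)
    df = len(y_t) - full.shape[1]
    return (rss_r - rss_f) / (rss_f / df)   # large => x Granger-causes y

# Simulated pair where x drives y with a one-sample delay,
# analogous to high gamma activity preceding alpha activity.
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_xy = granger_f(x, y)   # x -> y direction: large statistic
f_yx = granger_f(y, x)   # reverse direction: near zero
```

The asymmetry between the two statistics is the directed-influence signature the study reports between the high gamma and alpha bands; production analyses use multiple lags and proper F-test p-values rather than this one-lag sketch.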

  15. Auditory artificial grammar learning in macaque and marmoset monkeys.

    PubMed

    Wilson, Benjamin; Slater, Heather; Kikuchi, Yukiko; Milne, Alice E; Marslen-Wilson, William D; Smith, Kenny; Petkov, Christopher I

    2013-11-27

    Artificial grammars (AG) are designed to emulate aspects of the structure of language, and AG learning (AGL) paradigms can be used to study the extent of nonhuman animals' structure-learning capabilities. However, different AG structures have been used with nonhuman animals and are difficult to compare across studies and species. We developed a simple quantitative parameter space, which we used to summarize previous nonhuman animal AGL results. This was used to highlight an under-studied AG with a forward-branching structure, designed to model certain aspects of the nondeterministic nature of word transitions in natural language and animal song. We tested whether two monkey species could learn aspects of this auditory AG. After habituating the monkeys to the AG, analysis of video recordings showed that common marmosets (New World monkeys) differentiated between well-formed, correct testing sequences and those violating the AG structure based primarily on simple learning strategies. By comparison, rhesus macaques (Old World monkeys) showed evidence for deeper levels of AGL. A novel eye-tracking approach confirmed this result in the macaques and demonstrated evidence for more complex AGL. This study provides evidence for a previously unknown level of AGL complexity in Old World monkeys that seems less evident in New World monkeys, which are more distant evolutionary relatives to humans. The findings allow for the development of both marmosets and macaques as neurobiological model systems to study different aspects of AGL at the neuronal level. PMID:24285889

  16. Binding of Verbal and Spatial Features in Auditory Working Memory

    ERIC Educational Resources Information Center

    Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.

    2009-01-01

    The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g. V1S1, V2S2, V3S3, V4S4) were…

  17. Independent impacts of age and hearing loss on spatial release in a complex auditory environment

    PubMed Central

    Gallun, Frederick J.; Diedesch, Anna C.; Kampel, Sean D.; Jakien, Kasey M.

    2013-01-01

    Listeners in complex auditory environments can benefit from the ability to use a variety of spatial and spectrotemporal cues for sound source segregation. Probing these abilities is an essential part of gaining a more complete understanding of why listeners differ in navigating the auditory environment. Two fundamental processes that can impact the auditory systems of individual listeners are aging and hearing loss. One difficulty with uncovering the independent effects of age and hearing loss on spatial release is the commonly observed phenomenon of age-related hearing loss. In order to reveal the effects of aging on spatial hearing, it is essential to develop testing methods that reduce the influence of hearing loss on the outcomes. The statistical power needed for such testing generally requires a larger number of participants than can easily be tested using traditional behavioral methods. This work describes the development and validation of a rapid method by which listeners can be categorized in terms of their ability to use spatial and spectrotemporal cues to separate competing speech streams. Results show that when age and audibility are not covarying, age alone can be shown to substantially reduce spatial release from masking. These data support the hypothesis that aging, independent of an individual's hearing threshold, can result in changes in the cortical and/or subcortical structures essential for spatial hearing. PMID:24391535

  18. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were

  19. Changes in Auditory Frequency Guide Visual-Spatial Attention

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2011-01-01

    How do the characteristics of sounds influence the allocation of visual-spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual-spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be…

  20. Selective Attention Modulates Human Auditory Brainstem Responses: Relative Contributions of Frequency and Spatial Cues

    PubMed Central

    Lehmann, Alexandre; Schönwiesner, Marc

    2014-01-01

    Selective attention is the mechanism that allows focusing one’s attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues. PMID:24454869

  1. Effects of spatially correlated acoustic-tactile information on judgments of auditory circular direction

    NASA Astrophysics Data System (ADS)

    Cohen, Annabel J.; Lamothe, M. J. Reina; Toms, Ian D.; Fleming, Richard A. G.

    2002-05-01

    Cohen, Lamothe, Fleming, MacIsaac, and Lamoureux [J. Acoust. Soc. Am. 109, 2460 (2001)] reported that proximity governed circular direction judgments (clockwise/counterclockwise) of two successive tones emanating from all pairs of 12 speakers located at 30-degree intervals around a listener's head. Many listeners appeared to experience systematic front-back confusion. Diametrically opposed locations (180 degrees, a theoretically ambiguous direction) produced a direction bias pattern resembling Deutsch's tritone paradox [Deutsch, Kuyper, and Fisher, Music Percept. 5, 79-92 (1987)]. In Experiment 1 of the present study, the circular direction task was conducted in the tactile domain using 12 circumcranial points of vibration. For all 5 participants, proximity governed direction (without front-back confusion) and a simple clockwise bias was shown for 180-degree pairs. Experiment 2 tested 9 new participants in one unimodal auditory condition and two bimodal auditory-tactile conditions (spatially correlated/spatially uncorrelated). Correlated auditory-tactile information eliminated front-back confusion for 8 participants and replaced the ``paradoxical'' bias for 180-degree pairs with the clockwise bias. Thus, spatially correlated audio-tactile location information improves the veridical representation of 360-degree acoustic space, and modality-specific principles are implicated by the unique circular direction bias patterns for 180-degree pairs in the separate auditory and tactile modalities. [Work supported by NSERC.]
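    The proximity principle invoked above can be stated compactly: on a speaker ring, the shorter arc between the two tone locations determines the perceived rotation, and diametrically opposed (180-degree) pairs are ambiguous. A minimal sketch, assuming azimuth increases clockwise (the function name and convention are illustrative, not from the paper):

```python
def circular_direction(start_deg, end_deg):
    """Predicted direction judgment for two successive tones on a speaker
    ring, under the proximity principle: the shorter arc determines the
    percept. 180-degree separations are theoretically ambiguous."""
    arc = (end_deg - start_deg) % 360
    if arc == 180:
        return "ambiguous"
    return "clockwise" if arc < 180 else "counterclockwise"

# Speakers at 30-degree intervals around the head, as in the study:
print(circular_direction(0, 30))    # -> clockwise  (30-degree arc)
print(circular_direction(0, 330))   # -> counterclockwise (shorter arc is 30)
print(circular_direction(90, 270))  # -> ambiguous  (diametrically opposed)
```

    The "paradoxical" 180-degree bias patterns the study reports are exactly the cases this rule leaves undecided, which is why those pairs diagnose modality-specific biases.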

  2. Auditory Model: Effects on Learning under Blocked and Random Practice Schedules

    ERIC Educational Resources Information Center

    Han, Dong-Wook; Shea, Charles H.

    2008-01-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice.…

  3. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of irrelevant speech processing that would presumably distract from the phonological task involving the letters. PMID:23261663

  4. Hierarchical and serial processing in the spatial auditory cortical pathway is degraded by natural aging

    PubMed Central

    Juarez-Salinas, Dina L.; Engle, James R.; Navarro, Xochi O.; Recanzone, Gregg H.

    2010-01-01

    The compromised abilities to localize sounds and to understand speech are two hallmark deficits in aged individuals. The auditory cortex is necessary for these processes, yet we know little about how normal aging affects these early cortical fields. In this study, we recorded the spatial tuning of single neurons in primary (area A1) and secondary (area CL) auditory cortical areas in young and aged alert rhesus macaques. We found that the neurons of aged animals had greater spontaneous and driven activity, and broader spatial tuning compared to those of younger animals. Importantly, spatial tuning was not sharpened between A1 and CL in aged monkeys as it is in younger monkeys. This implies that a major effect of normal aging is a degradation of the hierarchical processing between serially connected cortical areas, which could be a key contributing mechanism of the general cognitive decline that is commonly observed in normal aging. PMID:21048138

  5. An exploration of spatial auditory BCI paradigms with different sounds: music notes versus beeps.

    PubMed

    Huang, Minqiang; Daly, Ian; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2016-06-01

    Visual brain-computer interfaces (BCIs) are not suitable for people who cannot reliably maintain their eye gaze. Considering that this group usually maintains audition, an auditory based BCI may be a good choice for them. In this paper, we explore two auditory patterns: (1) a pattern utilizing symmetrical spatial cues with multiple frequency beeps [called the high low medium (HLM) pattern], and (2) a pattern utilizing non-symmetrical spatial cues with six tones derived from the diatonic scale [called the diatonic scale (DS) pattern]. These two patterns are compared to each other in terms of accuracy to determine which auditory pattern is better. The HLM pattern uses three different frequency beeps and has a symmetrical spatial distribution. The DS pattern uses six spoken stimuli: the solmization syllables "do", "re", "mi", "fa", "sol" and "la", derived from the diatonic scale. These six sounds are distributed to six spatially distributed speakers. Thus, we compare a BCI paradigm using beeps with another BCI paradigm using tones on the diatonic scale, when the stimuli are spatially distributed. Although no significant differences are found between the ERPs, the HLM pattern performs better than the DS pattern: the online accuracy achieved with the HLM pattern is significantly higher than that achieved with the DS pattern (p = 0.0028). PMID:27275376

  6. Eye movement preparation causes spatially-specific modulation of auditory processing: New evidence from event-related brain potentials

    PubMed Central

    Gherri, Elena; Driver, Jon; Eimer, Martin

    2009-01-01

    To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events. PMID:18614157

  7. Speech motor learning changes the neural response to both auditory and somatosensory signals

    PubMed Central

    Ito, Takayuki; Coppola, Joshua H.; Ostry, David J.

    2016-01-01

    In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which, like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses. We find that adaptation results in progressive changes to speech acoustical outputs that serve to correct for the perturbation. We also observe changes in both auditory and somatosensory event-related responses that are correlated with the magnitude of adaptation. These results indicate that sensory change occurs in conjunction with the processes involved in speech motor adaptation. PMID:27181603

  8. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  10. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    NASA Astrophysics Data System (ADS)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the ``song-control'' system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  11. Spatial auditory regularity encoding and prediction: Human middle-latency and long-latency auditory evoked potentials.

    PubMed

    Cornella, M; Bendixen, A; Grimm, S; Leung, S; Schröger, E; Escera, C

    2015-11-11

    By encoding acoustic regularities present in the environment, the human brain can generate predictions of what is likely to occur next. Recent studies suggest that deviations from encoded regularities are detected within 10-50 ms after stimulus onset, as indicated by electrophysiological effects in the middle latency response (MLR) range. This is upstream of previously known long-latency (LLR) signatures of deviance detection such as the mismatch negativity (MMN) component. In the present study, we created predictable and unpredictable contexts to investigate MLR and LLR signatures of the encoding of spatial auditory regularities and the generation of predictions from these regularities. Chirps were monaurally delivered in either a regular (predictable: left-right-left-right) or a random (unpredictable left/right alternation or repetition) manner. Occasional stimulus omissions occurred in both types of sequences. Results showed that the Na component (peaking at 34 ms after stimulus onset) was attenuated for regular relative to random chirps, although no differences were observed for stimulus omission responses in the same latency range. In the LLR range, larger chirp- and omission-evoked responses were elicited for the regular than for the random condition, and predictability effects were more prominent over the right hemisphere. We discuss our findings in the framework of a hierarchical organization of spatial regularity encoding. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25912975
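    The stimulus design above (regular vs. random ear-of-delivery sequences with occasional omissions) can be sketched in a few lines. The symbols, function name, and omission probability below are illustrative assumptions, not values taken from the study:

```python
import random

def make_sequence(n, regular=True, p_omit=0.1, seed=0):
    """Generate an ear-of-delivery sequence for a paradigm like the one in
    the abstract: 'L'/'R' chirps with occasional omissions ('-').
    Regular runs alternate strictly L-R-L-R; random runs draw each ear
    independently, so the next ear is unpredictable."""
    rng = random.Random(seed)
    seq = []
    for i in range(n):
        ear = "LR"[i % 2] if regular else rng.choice("LR")
        seq.append("-" if rng.random() < p_omit else ear)
    return seq

# In the regular condition every delivered chirp (and every omission slot)
# is predictable from its serial position; in the random condition it is not.
print("".join(make_sequence(20, regular=True)))
print("".join(make_sequence(20, regular=False)))
```

    The analytical contrast in the study hinges on exactly this difference: an omission in the regular stream violates a concrete prediction ("a left chirp should occur now"), whereas an omission in the random stream does not.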

  12. Auditory processing deficits among language-learning disordered children and adults

    NASA Astrophysics Data System (ADS)

    Wayland, Ratree; Lombardino, Linda

    2003-10-01

    It has been estimated that approximately 5%-9% of school-aged children in the United States are diagnosed with some kind of learning disorder. Moreover, previous research has established that many of these children exhibited perceptual deficits in response to auditory stimuli, suggesting that an auditory perceptual deficit may underlie their learning disabilities. The goal of this research is to examine the ability to auditorily process speech and nonspeech stimuli among language-learning disabled (LLD) children and adults. The two questions that will be addressed in this study are: (a) Are there subtypes of LLD children/adults based on their auditory processing deficit, and (b) Is there any relationship between types of auditory processing deficits and types of language deficits as measured by a battery of psychoeducational tests.

  13. Spatial and temporal auditory processing deficits following right hemisphere infarction. A psychophysical study.

    PubMed

    Griffiths, T D; Rees, A; Witton, C; Cross, P M; Shakir, R A; Green, G G

    1997-05-01

    Higher auditory function in a patient was investigated following a right hemisphere infarction between the middle and posterior cerebral artery territories involving the insula. The patient complained of lack of musical appreciation and a battery of tests confirmed a dissociated receptive musical deficit in the presence of normal appreciation of environmental sounds and speech. The ability to detect continuous changes in sound frequency in the form of sinusoidal frequency modulation was preserved. There was, however, a deficit in the analysis of rapid temporal sequences of notes which could underlie his musical deficit. This case provides further evidence for the existence of amusia as a distinct form of auditory agnosia, but does not support the hypothesis that bilateral lesions are required to produce such a deficit. Unexpectedly, the patient was also found to have a deficit in the perception of apparent sound-source movement. We suggest that this deficit is analogous to the visual phenomenon of akinetopsia, and is in accord with PET work suggesting involvement of areas outside primary auditory cortex in sound movement perception. A possible common deficit in auditory temporal and spatial 'scene analysis' is discussed. PMID:9183249

  14. Modulation of human auditory spatial scene analysis by transcranial direct current stimulation.

    PubMed

    Lewald, Jörg

    2016-04-01

    Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene. PMID:26825012

  15. The Spatially Competent Child with Learning Disabilities (SCLD): The Evidence from Research.

    ERIC Educational Resources Information Center

    Bannatyne, Alexander

    Research is reviewed in support of the author's hypothesis that the majority (60 to 80 percent) of learning disabled children are not brain damaged but have above average spatial ability and major deficits in auditory-vocal memory processing which are genetic in nature. Research is reported to support other aspects of his hypothesis such as the…

  16. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    SciTech Connect

    Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong; Xiong Xiaoping; Merchant, Thomas E.

    2010-07-15

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.

  17. Auditory spatial discrimination by barn owls in simulated echoic conditions

    NASA Astrophysics Data System (ADS)

    Spitzer, Matthew W.; Bala, Avinash D. S.; Takahashi, Terry T.

    2003-03-01

    In humans, directional hearing in reverberant conditions is characterized by a ``precedence effect,'' whereby directional information conveyed by leading sounds dominates perceived location, and listeners are relatively insensitive to directional information conveyed by lagging sounds. Behavioral studies provide evidence of precedence phenomena in a wide range of species. The present study employs a discrimination paradigm, based on habituation and recovery of the pupillary dilation response, to provide quantitative measures of precedence phenomena in the barn owl. As in humans, the owl's ability to discriminate changes in the location of lagging sources is impaired relative to that for single sources. Spatial discrimination of lead sources is also impaired, but to a lesser extent than discrimination of lagging sources. Results of a control experiment indicate that sensitivity to monaural cues cannot account for discrimination of lag source location. Thus, impairment of discrimination ability in the two-source conditions most likely reflects a reduction in sensitivity to binaural directional information. These results demonstrate a similarity of precedence effect phenomena in barn owls and humans, and provide a basis for quantitative comparison with neuronal data from the same species.

  18. Auditory spatial resolution in horizontal, vertical, and diagonal planes

    NASA Astrophysics Data System (ADS)

    Grantham, D. Wesley; Hornsby, Benjamin W. Y.; Erpenbeck, Eric A.

    2003-08-01

    Minimum audible angle (MAA) and minimum audible movement angle (MAMA) thresholds were measured for stimuli in horizontal, vertical, and diagonal (60°) planes. A pseudovirtual technique was employed in which signals were recorded through KEMAR's ears and played back to subjects through insert earphones. Thresholds were obtained for wideband, high-pass, and low-pass noises. Only 6 of 20 subjects obtained wideband vertical-plane MAAs less than 10°, and only these 6 subjects were retained for the complete study. For all three filter conditions thresholds were lowest in the horizontal plane, slightly (but significantly) higher in the diagonal plane, and highest for the vertical plane. These results were similar in magnitude and pattern to those reported by Perrott and Saberi [J. Acoust. Soc. Am. 87, 1728-1731 (1990)] and Saberi and Perrott [J. Acoust. Soc. Am. 88, 2639-2644 (1990)], except that these investigators generally found that thresholds for diagonal planes were as good as those for the horizontal plane. The present results are consistent with the hypothesis that diagonal-plane performance is based on independent contributions from a horizontal-plane system (sensitive to interaural differences) and a vertical-plane system (sensitive to pinna-based spectral changes). Measurements of the stimuli recorded through KEMAR indicated that sources presented from diagonal planes can produce larger interaural level differences (ILDs) in certain frequency regions than would be expected based on the horizontal projection of the trajectory. Such frequency-specific ILD cues may underlie the very good performance reported in previous studies for diagonal spatial resolution. Subjects in the present study could apparently not take advantage of these cues in the diagonal-plane condition, possibly because they did not externalize the images to their appropriate positions in space or possibly because of the absence of a patterned visual field.
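
The interaural level differences measured through KEMAR can be computed from the level ratio of the two ear signals. A minimal sketch (the function name, the noise signal, and the 6 dB attenuation are illustrative, not taken from the study; applying it after band-pass filtering would yield the frequency-specific ILDs discussed above):

```python
import numpy as np

def ild_db(left, right):
    """Broadband interaural level difference in dB (positive = left ear louder)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left) / rms(right))

# Illustrative check: attenuate the right-ear signal by 6 dB.
rng = np.random.default_rng(0)
noise = rng.standard_normal(44100)
ild = ild_db(noise, noise * 10 ** (-6 / 20))   # recovers ~6.0 dB
```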

  19. Polarity-specific transcranial direct current stimulation disrupts auditory pitch learning.

    PubMed

    Matsushita, Reiko; Andoh, Jamila; Zatorre, Robert J

    2015-01-01

    Transcranial direct current stimulation (tDCS) is attracting increasing interest because of its potential for therapeutic use. While its effects have been investigated mainly with motor and visual tasks, less is known in the auditory domain. Past tDCS studies with auditory tasks demonstrated various behavioral outcomes, possibly due to differences in stimulation parameters, task-induced brain activity, or task measurements used in each study. Further research using well-validated tasks is therefore required to clarify the behavioral effects of tDCS on the auditory system. Here, we took advantage of findings from a prior functional magnetic resonance imaging study, which demonstrated that the right auditory cortex is modulated during fine-grained pitch learning of microtonal melodic patterns. Targeting the right auditory cortex with tDCS using this same task thus allowed us to test the hypothesis that this region is causally involved in pitch learning. Participants in the current study were trained for 3 days while we measured pitch discrimination thresholds for microtonal melodies each day using a psychophysical staircase procedure. We administered anodal, cathodal, or sham tDCS to three groups of participants over the right auditory cortex on the second day of training during performance of the task. Both the sham and the cathodal groups showed the expected significant learning effect (decreased pitch threshold) over the 3 days of training; in contrast, we observed a blocking effect of anodal tDCS on auditory pitch learning, such that this group showed no significant change in thresholds over the 3 days. The results support a causal role for the right auditory cortex in pitch discrimination learning. PMID:26041982
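
The psychophysical staircase procedure mentioned above adaptively raises or lowers the pitch difference based on the listener's responses. A minimal 2-down/1-up sketch with a simulated observer (all names, parameters, and the observer model are illustrative assumptions, not the study's implementation):

```python
import random

def staircase_threshold(true_threshold, start=100.0, factor=2.0,
                        n_reversals=8, seed=0):
    """2-down/1-up adaptive staircase: the tracked level converges near the
    ~70.7%-correct point of the simulated observer's psychometric function."""
    rng = random.Random(seed)
    level = start            # e.g., pitch difference in cents
    streak = 0               # consecutive correct responses
    direction = None         # last step direction, "up" or "down"
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated observer: always correct above threshold, otherwise
        # guesses (50% correct in a two-alternative task).
        correct = level > true_threshold or rng.random() < 0.5
        if correct:
            streak += 1
            if streak == 2:                  # two in a row -> make it harder
                streak = 0
                if direction == "up":
                    reversals.append(level)
                direction = "down"
                level /= factor
        else:                                # one error -> make it easier
            streak = 0
            if direction == "down":
                reversals.append(level)
            direction = "up"
            level *= factor
    # Threshold estimate: mean of the last six reversal levels.
    return sum(reversals[-6:]) / len(reversals[-6:])

estimate = staircase_threshold(true_threshold=25.0)
```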

  1. A blueprint for vocal learning: auditory predispositions from brains to genomes.

    PubMed

    Wheatcroft, David; Qvarnström, Anna

    2015-08-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions is determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life in combination with modern comparative genomic approaches may provide insights into the evolution of vocal learning. PMID:26246333

  3. Learning to produce speech with an altered vocal tract: The role of auditory feedback

    NASA Astrophysics Data System (ADS)

    Jones, Jeffery A.; Munhall, K. G.

    2003-01-01

    Modifying the vocal tract alters a speaker's previously learned acoustic-articulatory relationship. This study investigated the contribution of auditory feedback to the process of adapting to vocal-tract modifications. Subjects said the word /tas/ while wearing a dental prosthesis that extended the length of their maxillary incisor teeth. The prosthesis affected /s/ productions and the subjects were asked to learn to produce ``normal'' /s/'s. They alternately received normal auditory feedback and noise that masked their natural feedback during productions. Acoustic analysis of the speakers' /s/ productions showed that the distribution of energy across the spectra moved toward that of normal, unperturbed production with increased experience with the prosthesis. However, the acoustic analysis did not show any significant differences in learning dependent on auditory feedback. By contrast, when naive listeners were asked to rate the quality of the speakers' utterances, productions made when auditory feedback was available were evaluated to be closer to the subjects' normal productions than when feedback was masked. The perceptual analysis showed that speakers were able to use auditory information to partially compensate for the vocal-tract modification. Furthermore, utterances produced during the masked conditions also improved over a session, demonstrating that the compensatory articulations were learned and available after auditory feedback was removed.

  4. Effect of GIS Learning on Spatial Thinking

    ERIC Educational Resources Information Center

    Lee, Jongwon; Bednarz, Robert

    2009-01-01

    A spatial-skills test is used to examine the effect of GIS learning on the spatial thinking ability of college students. Eighty students at a large state university completed pre- and post- spatial-skills tests administered during the 2003 fall semester. Analysis of changes in the students' test scores revealed that GIS learning helped students…

  5. Subthreshold resonance properties contribute to the efficient coding of auditory spatial cues.

    PubMed

    Remme, Michiel W H; Donato, Roberta; Mikiel-Hunter, Jason; Ballestero, Jimena A; Foster, Simon; Rinzel, John; McAlpine, David

    2014-06-01

    Neurons in the medial superior olive (MSO) and lateral superior olive (LSO) of the auditory brainstem code for sound-source location in the horizontal plane, extracting interaural time differences (ITDs) from the stimulus fine structure and interaural level differences (ILDs) from the stimulus envelope. Here, we demonstrate a postsynaptic gradient in temporal processing properties across the presumed tonotopic axis; neurons in the MSO and the low-frequency limb of the LSO exhibit fast intrinsic electrical resonances and low input impedances, consistent with their processing of ITDs in the temporal fine structure. Neurons in the high-frequency limb of the LSO show low-pass electrical properties, indicating they are better suited to extracting information from the slower, modulated envelopes of sounds. Using a modeling approach, we assess ITD and ILD sensitivity of the neural filters to natural sounds, demonstrating that the transformation in temporal processing along the tonotopic axis contributes to efficient extraction of auditory spatial cues. PMID:24843153
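
The ITD extraction from the stimulus fine structure that MSO neurons perform can be approximated computationally as finding the lag that maximizes the cross-correlation of the two ear signals. A sketch under assumed names, with a made-up 13-sample delay (roughly 295 microseconds at 44.1 kHz):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds as the lag that
    maximizes the cross-correlation of the ear signals (positive lag means
    the left-ear signal lags, i.e., the source is toward the right ear)."""
    xcorr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    return lags[np.argmax(xcorr)] / fs

# Illustrative check with a noise burst delayed at the left ear.
fs = 44100
rng = np.random.default_rng(0)
noise = rng.standard_normal(2205)                        # 50 ms burst
delay = 13                                               # samples
left = np.concatenate([np.zeros(delay), noise])[: len(noise)]
right = noise
itd = estimate_itd(left, right, fs)                      # recovers delay / fs
```

A noise burst is used rather than a pure tone so the cross-correlation has a single unambiguous peak.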

  6. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    PubMed

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties. PMID:24264076

  7. Representation of Early Sensory Experience in the Adult Auditory Midbrain: Implications for Vocal Learning

    PubMed Central

    van der Kant, Anne; Derégnaucourt, Sébastien; Gahr, Manfred; Van der Linden, Annemie; Poirier, Colline

    2013-01-01

    Vocal learning in songbirds and humans occurs by imitation of adult vocalizations. In both groups, vocal learning includes a perceptual phase during which juvenile birds and infants memorize adult vocalizations. Despite intensive research, the neural mechanisms supporting this auditory memory are still poorly understood. The present functional MRI study demonstrates that in adult zebra finches, the right auditory midbrain nucleus responds selectively to the copied vocalizations. The selective signal is distinct from selectivity for the bird's own song and does not simply reflect acoustic differences between the stimuli. Furthermore, the amplitude of the selective signal is positively correlated with the strength of vocal learning, measured by the amount of song that experimental birds copied from the adult model. These results indicate that early sensory experience can generate a long-lasting memory trace in the auditory midbrain of songbirds that may support song learning. PMID:23637903

  8. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  9. Hearing impairment induces frequency-specific adjustments in auditory spatial tuning in the optic tectum of young owls.

    PubMed

    Gold, J I; Knudsen, E I

    1999-11-01

    Bimodal, auditory-visual neurons in the optic tectum of the barn owl are sharply tuned for sound source location. The auditory receptive fields (RFs) of these neurons are restricted in space primarily as a consequence of their tuning for interaural time differences and interaural level differences across broad ranges of frequencies. In this study, we examined the extent to which frequency-specific features of early auditory experience shape the auditory spatial tuning of these neurons. We manipulated auditory experience by implanting in one ear canal an acoustic filtering device that altered the timing and level of sound reaching the eardrum in a frequency-dependent fashion. We assessed the auditory spatial tuning at individual tectal sites in normal owls and in owls raised with the filtering device. At each site, we measured a family of auditory RFs using broadband sound and narrowband sounds with different center frequencies both with and without the device in place. In normal owls, the narrowband RFs for a given site all included a common region of space that corresponded with the broadband RF and aligned with the site's visual RF. Acute insertion of the filtering device in normal owls shifted the locations of the narrowband RFs away from the visual RF, the magnitude and direction of the shifts depending on the frequency of the stimulus. In contrast, in owls that were raised wearing the device, narrowband and broadband RFs were aligned with visual RFs so long as the device was in the ear but not after it was removed, indicating that auditory spatial tuning had been adaptively altered by experience with the device. The frequency tuning of tectal neurons in device-reared owls was also altered from normal. The results demonstrate that experience during development adaptively modifies the representation of auditory space in the barn owl's optic tectum in a frequency-dependent manner. PMID:10561399

  10. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

    This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture. PMID:26410669
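
The task-cost computation described above (single-task baseline subtracted from dual-task performance) can be sketched as follows; the numbers are illustrative, not the study's data:

```python
def dual_task_cost(dual, baseline):
    """Dual-task cost: performance under dual-task conditions minus the
    single-task baseline (positive values indicate interference)."""
    return dual - baseline

# Illustrative values (ms for reaction time, mm for sway path):
rt_cost = dual_task_cost(dual=620.0, baseline=540.0)     # 80 ms slower
sway_cost = dual_task_cost(dual=95.0, baseline=88.0)     # 7 mm more sway
```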

  11. Registration of neural maps through value-dependent learning: modeling the alignment of auditory and visual maps in the barn owl's optic tectum.

    PubMed

    Rucci, M; Tononi, G; Edelman, G M

    1997-01-01

    In the optic tectum (OT) of the barn owl, visual and auditory maps of space are found in close alignment with each other. Experiments in which such alignment has been disrupted have shown a considerable degree of plasticity in the auditory map. The external nucleus of the inferior colliculus (ICx), an auditory center that projects massively to the tectum, is the main site of plasticity; however, it is unclear by what mechanisms the alignment between the auditory map in the ICx and the visual map in the tectum is established and maintained. In this paper, we propose that such map alignment occurs through a process of value-dependent learning. According to this paradigm, value systems, identifiable with neuromodulatory systems having diffuse projections, respond to innate or acquired salient cues and modulate changes in synaptic efficacy in many brain regions. To test the self-consistency of this proposal, we have developed a computer model of the principal neural structures involved in the process of auditory localization in the barn owl. This is complemented by simulations of aspects of the barn owl phenotype and of the experimental environment. In the model, a value system is activated whenever the owl carries out a foveation toward an auditory stimulus. A term representing the diffuse release of a neuromodulator interacts with local pre- and postsynaptic events to determine synaptic changes in the ICx. Through large-scale simulations, we have replicated a number of experimental observations on the development of spatial alignment between the auditory and visual maps during normal visual experience, after the retinal image is shifted through prismatic goggles, and after the reestablishment of normal visual input. The results suggest that value-dependent learning is sufficient to account for the registration of auditory and visual maps of space in the OT of the barn owl, and they lead to a number of experimental predictions. PMID:8987759
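
The learning rule described above, a local pre/postsynaptic term gated by a diffusely released value signal, can be sketched as a three-factor update. The names, dimensions, and learning rate below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def value_dependent_update(w, pre, post, value, lr=0.05):
    """Three-factor rule: the Hebbian term (post x pre) changes synaptic
    weights only when the diffuse value/neuromodulator signal is released."""
    return w + lr * value * np.outer(post, pre)

rng = np.random.default_rng(1)
w = np.zeros((3, 4))             # ICx -> tectum style weight matrix
pre = rng.random(4)              # presynaptic (auditory) activity
post = rng.random(3)             # postsynaptic (tectal) activity

# value = 1.0 on trials where the owl foveates toward the auditory stimulus
w_after = value_dependent_update(w, pre, post, value=1.0)
w_same = value_dependent_update(w, pre, post, value=0.0)   # no foveation: no change
```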

  12. The time-course of distractor processing in auditory spatial negative priming.

    PubMed

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2016-09-01

    The spatial negative priming effect denotes slowed-down and sometimes more error-prone responding to a location that previously contained a distractor as compared with a previously unoccupied location. In vision, this effect has been attributed to the inhibition of irrelevant locations, and recently, of their task-assigned responses. Interestingly, auditory versions of the task did not yield evidence for inhibitory processing of task-irrelevant events which might suggest modality-specific distractor processing in vision and audition. Alternatively, the inhibitory processes may differ in how they develop over time. If this were the case, the absence of inhibitory after-effects might be due to an inappropriate timing of successive presentations in previous auditory spatial negative priming tasks. Specifically, the distractor may not yet have been inhibited or inhibition may already have dissipated at the time performance is assessed. The present study was conducted to test these alternatives. Participants indicated the location of a target sound in the presence of a concurrent distractor sound. Performance was assessed between two successive prime-probe presentations. The time between the prime response and the probe sounds (response-stimulus interval, RSI) was systematically varied between three groups (600, 1250, 1900 ms). For all RSI groups, the results showed no evidence for inhibitory distractor processing but conformed to the predictions of the feature mismatching hypothesis. The results support the assumption that auditory distractor processing does not recruit an inhibitory mechanism but involves the integration of spatial and sound identity features into common representations. PMID:26233234

  13. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals.

    PubMed

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz; MacNeilage, Paul R

    2016-08-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504
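
The weighting model described above, perceived head rotation as a linear combination of vestibular and proprioceptive/efference-copy signals, can be sketched with least squares across the three conditions. The perceived-rotation gains below are made up for illustration, not the study's data:

```python
import numpy as np

# Rows: which signal classes are available in each condition
# (columns: vestibular, proprioception/efference copy).
conditions = np.array([
    [1.0, 1.0],   # active: subject rotates the head on a stationary body
    [1.0, 0.0],   # passive: chair rotates the whole subject
    [0.0, 1.0],   # cancellation: body rotates under a space-fixed head
])
# Hypothetical perceived-rotation gains (1.0 = veridical) per condition:
perceived = np.array([1.2, 1.0, 0.3])

# Least-squares weights on the two signal classes:
weights, *_ = np.linalg.lstsq(conditions, perceived, rcond=None)
```

With these illustrative numbers the vestibular weight exceeds the proprioceptive/efference-copy weight, mirroring the paper's conclusion that vestibular signals dominate.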

  14. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    PubMed

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to

  15. Implicit learning of between-group intervals in auditory temporal structures.

    PubMed

    Terry, J; Stevens, C J; Weidemann, G; Tillmann, B

    2016-08-01

    Implicit learning of temporal structure has primarily been reported when events within a sequence (e.g., visual-spatial locations, tones) are systematically ordered and correlated with the temporal structure. An auditory serial reaction time task was used to investigate implicit learning of temporal intervals between pseudorandomly ordered syllables. Over exposure, participants identified syllables presented in sequences with weakly metrical temporal structures. In a test block, the temporal structure differed from exposure only in the duration of the interonset intervals (IOIs) between groups. It was hypothesized that reaction time (RT) to syllables following between-group IOIs would decrease with exposure and increase at test. In Experiments 1 and 2, the sequences presented over exposure and test were counterbalanced across participants (Pattern 1 and Pattern 2 conditions). An RT increase at test to syllables following between-group IOIs was only evident in the condition that presented an exposure structure with a slightly stronger meter (Pattern 1 condition). The Pattern 1 condition also elicited a global expectancy effect: Test block RT slowed to earlier-than-expected syllables (i.e., syllables shifted to an earlier beat) but not to later-than-expected syllables. Learning of between-group IOIs and the global expectancy effect extended to the Pattern 2 condition when meter was strengthened with an external pulse (Experiment 2). Experiment 3 further demonstrated implicit learning of a new weakly metrical structure with only earlier-than-expected violations at test. Overall findings demonstrate learning of weakly metrical rhythms without correlated event structures (i.e., sequential syllable orders). They further suggest the presence of a global expectancy effect mediated by metrical strength. PMID:27301354

  16. Localized Brain Activation Related to the Strength of Auditory Learning in a Parrot

    PubMed Central

    Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A.; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J.

    2012-01-01

    Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory. PMID:22701714

  17. Sensorimotor learning in children and adults: Exposure to frequency-altered auditory feedback during speech production.

    PubMed

    Scheerer, N E; Jacobson, D S; Jones, J A

    2016-02-01

    Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability. PMID:26628403

  18. Utilising reinforcement learning to develop strategies for driving auditory neural implants

    NASA Astrophysics Data System (ADS)

    Lee, Geoffrey W.; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G.

    2016-08-01

    Objective. In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment which is based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. Approach. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provides a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Main results. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Learning effective replication using neural stimulation takes less than 20 min under continuous testing. Significance. These results show the utility of reinforcement learning in the field of neural stimulation and can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
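    The record does not specify its "modified n-Armed Bandit solution" in detail, but the core epsilon-greedy bandit loop such approaches build on can be sketched briefly. This is an illustrative sketch only: the `rewards` list and all parameter values are hypothetical stand-ins for the simulator's scores of how well each candidate stimulation pattern reproduces the acoustically evoked neural response, not the authors' implementation.

    ```python
    import random

    def run_bandit(rewards, steps=1000, epsilon=0.1, seed=42):
        """Epsilon-greedy n-armed bandit.

        `rewards[i]` is the (deterministic, hypothetical) payoff for pulling
        arm i, standing in for how well stimulation pattern i reproduces the
        acoustically evoked neural response in the simulator.
        """
        rng = random.Random(seed)
        n = len(rewards)
        counts = [1] * n
        estimates = list(rewards)  # pull each arm once so estimates start from data
        for _ in range(steps):
            if rng.random() < epsilon:
                arm = rng.randrange(n)  # explore: pick a random arm
            else:
                arm = max(range(n), key=estimates.__getitem__)  # exploit best estimate
            counts[arm] += 1
            # Incremental running-mean update of the value estimate.
            estimates[arm] += (rewards[arm] - estimates[arm]) / counts[arm]
        return estimates

    # Arm 2 mimics the stimulation pattern with the best-matched response.
    est = run_bandit([0.2, 0.5, 0.9, 0.4])
    best = max(range(len(est)), key=est.__getitem__)
    ```

    With deterministic rewards the estimates converge to the true values and the exploit step settles on the best arm; in the real closed-loop setting the reward would be a noisy similarity measure between evoked and target IC responses.
    
    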

  19. Spatial organization of excitatory synaptic inputs to layer 4 neurons in mouse primary auditory cortex.

    PubMed

    Kratz, Megan B; Manis, Paul B

    2015-01-01

    Layer 4 (L4) of primary auditory cortex (A1) receives a tonotopically organized projection from the medial geniculate nucleus of the thalamus. However, individual neurons in A1 respond to a wider range of sound frequencies than would be predicted by their thalamic input, which suggests the existence of cross-frequency intracortical networks. We used laser scanning photostimulation and uncaging of glutamate in brain slices of mouse A1 to characterize the spatial organization of intracortical inputs to L4 neurons. Slices were prepared to include the entire tonotopic extent of A1. We find that L4 neurons receive local vertically organized (columnar) excitation from layers 2 through 6 (L6) and horizontally organized excitation primarily from L4 and L6 neurons in regions centered ~300-500 μm caudal and/or rostral to the cell. Excitatory horizontal synaptic connections from layers 2 and 3 were sparse. The origins of horizontal projections from L4 and L6 correspond to regions in the tonotopic map that are approximately an octave away from the target cell location. Such spatially organized lateral connections may contribute to the detection and processing of auditory objects with specific spectral structures. PMID:25972787

  1. Developmental shifts in gene expression in the auditory forebrain during the sensitive period for song learning

    PubMed Central

    London, Sarah E.; Dong, Shu; Replogle, Kirstin; Clayton, David F.

    2009-01-01

    A male zebra finch begins to learn to sing by memorizing a tutor’s song during a sensitive period in juvenile development. Tutor song memorization requires molecular signaling within the auditory forebrain. Using microarray and in situ hybridizations, we tested whether the auditory forebrain at an age just prior to tutoring expresses a different set of genes compared to later in life after song learning has ceased. Microarray analysis revealed differences in expression of thousands of genes in the male auditory forebrain at posthatch day 20 (P20) compared to adulthood. Further, song playbacks had essentially no impact on gene expression in P20 auditory forebrain, but altered expression of hundreds of genes in adults. Most genes that were song-responsive in adults were expressed at constitutively high levels at P20. Using in situ hybridization with a representative sample of 44 probes, we confirmed these effects and found that birds at P20 and P45 were similar in their gene expression patterns. Additionally, 8 of the probes showed male-female differences in expression. We conclude that the developing auditory forebrain is in a very different molecular state from the adult, despite its relatively mature gross morphology and electrophysiological responsiveness to song stimuli. Developmental gene expression changes may contribute to fine-tuning of cellular and molecular properties necessary for song learning. PMID:19360720

  2. Is the auditory evoked P2 response a biomarker of learning?

    PubMed

    Tremblay, Kelly L; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory

    2014-01-01

    Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects are retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training exclusively for the group that learned to identify the pre-voiced contrast

  3. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study.

    PubMed

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  5. The influence of acoustic reflections from diffusive architectural surfaces on spatial auditory perception

    NASA Astrophysics Data System (ADS)

    Robinson, Philip W.

    This thesis addresses the effect of reflections from diffusive architectural surfaces on the perception of echoes and on auditory spatial resolution. Diffusive architectural surfaces play an important role in performance venue design for architectural expression and proper sound distribution. Extensive research has been devoted to the prediction and measurement of the spatial dispersion. However, previous psychoacoustic research on perception of reflections and the precedence effect has focused on specular reflections. This study compares the echo threshold of specular reflections against those for reflections from realistic architectural surfaces, and against synthesized reflections that isolate individual qualities of reflections from diffusive surfaces, namely temporal dispersion and spectral coloration. In particular, the activation of the precedence effect, as indicated by the echo threshold, is measured. Perceptual tests are conducted with direct sound, and simulated or measured reflections with varying temporal dispersion. The threshold for reflections from diffusive architectural surfaces is found to be comparable to that of a specular reflection of similar energy rather than similar amplitude. This is surprising because the amplitude of the dispersed reflection is highly attenuated, and onset cues are reduced. This effect indicates that the auditory system is integrating reflection response energy dispersed over many milliseconds into a single stream. Studies on the effect of a single diffuse reflection are then extended to a full architectural enclosure with various surface properties. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. It is found that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat rather than diffusive surfaces. Additionally, subjective impressions are

  6. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  7. Rapid Serial Auditory Presentation: A New Measure of Statistical Learning in Speech Segmentation.

    PubMed

    Franco, Ana; Eberlen, Julia; Destrebecqz, Arnaud; Cleeremans, Axel; Bertels, Julie

    2015-01-01

    The Rapid Serial Visual Presentation procedure is a method widely used in visual perception research. In this paper we propose an adaptation of this method which can be used with auditory material and enables assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They were subsequently instructed to perform a detection task in a Rapid Serial Auditory Presentation (RSAP) stream in which they had to detect a syllable in a short speech stream. Results showed that reaction times varied as a function of the statistical predictability of the syllable: second and third syllables of each word were responded to faster than first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning. PMID:26592534
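    The statistical predictability driving the RT effect above is conventionally quantified as the transitional probability between adjacent syllables, which is high within a nonsense word and low across word boundaries. A minimal sketch, using hypothetical syllable labels and a fixed word order chosen purely so the demo is reproducible:

    ```python
    from collections import Counter

    def transitional_probabilities(stream):
        """TP(B|A) = count(A immediately followed by B) / count(A),
        computed over all adjacent syllable pairs in the stream."""
        pairs = Counter(zip(stream, stream[1:]))
        firsts = Counter(stream[:-1])
        return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

    # Hypothetical trisyllabic nonsense words (labels are illustrative only).
    words = [("bi", "da", "ku"), ("pa", "do", "ti")]
    order = [0, 1, 0, 0, 1, 1, 0, 1]  # fixed word order for a reproducible demo
    stream = [syll for i in order for syll in words[i]]

    tp = transitional_probabilities(stream)
    # Within-word transitions are fully predictable (TP = 1.0);
    # word-boundary transitions are not.
    ```

    In the RSAP result, faster detection of second and third syllables tracks exactly these high within-word transitional probabilities.
    
    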

  8. Auditory categories with separable decision boundaries are learned faster with full feedback than with minimal feedback.

    PubMed

    Yi, Han Gyol; Chandrasekaran, Bharath

    2016-08-01

    During visual category learning, full feedback (e.g., "Wrong, that was a category 4."), relative to minimal feedback (e.g., "Wrong."), enhances performance when the relevant dimensions are separable. This pattern is reversed with inseparable dimensions. Here, the interaction between trial-by-trial feedback and separability of dimensions in the auditory domain is examined. Participants were trained to categorize auditory stimuli along separable or inseparable dimensions. One group received full feedback, while the other group received minimal feedback. In the separable-dimensions condition, the full-feedback group achieved higher accuracy than did the minimal-feedback group. In the inseparable-dimensions condition, performance was equivalent across the feedback groups. These results altogether suggest that trial-by-trial feedback affects auditory category learning performance differentially for separable and inseparable categories. PMID:27586759

  9. Air flow cued spatial learning in mice.

    PubMed

    Bouchekioua, Youcef; Mimura, Masaru; Watanabe, Shigeru

    2015-01-01

    Spatial learning experiments in rodents typically employ visual cues that are associated with a goal place, even though it is now well established that they have poor visual acuity. We assessed here the possibility of spatial learning in mice based on an air flow cue in a dry version of the Morris water maze task. A miniature fan was placed at each of the four cardinal points of the circular maze, but only one blew air towards the centre of the maze. The three other fans were blowing towards their own box. The mice were able to learn the task only if the spatial relationship between the air flow cue and the position of the goal place was kept constant across trials. A change of this spatial relationship resulted in an increase in the time to find the goal place. We report here the first evidence of spatial learning relying on an air flow cue. PMID:25257773

  10. Less Is More: Latent Learning Is Maximized by Shorter Training Sessions in Auditory Perceptual Learning

    PubMed Central

    Molloy, Katharine; Moore, David R.; Sohoglu, Ediz; Amitay, Sygal

    2012-01-01

    Background The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. Methodology/Principal Findings We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Conclusions/Significance Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions, and

  11. Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities

    PubMed Central

    Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo

    2015-01-01

    Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and the phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness and auditory middle latency response were tested with electrodes placed on the left and right hemispheres. The correlation between the measurements was performed using the Spearman rank correlation coefficient. Results There is some correlation between the tests, especially between the Pa component and syllabic awareness, where moderate negative correlation is observed. Conclusion In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed a low score for the age group, although for the objective examination, prolonged Pa latency in the contralateral via was observed. Negative weak to moderate correlation for Pa wave latency was observed, as was positive weak correlation for Na-Pa amplitude. PMID:26491479
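    For tie-free data, the Spearman rank correlation used above reduces to the classic formula rho = 1 - 6*sum(d_i^2) / (n*(n^2 - 1)), where d_i is the difference between the two ranks of observation i. A minimal sketch (a real analysis, e.g. scipy.stats.spearmanr, also handles tied ranks):

    ```python
    def spearman_rho(x, y):
        """Spearman rank correlation for tie-free samples, via
        rho = 1 - 6*sum(d_i^2) / (n*(n^2 - 1))."""
        def ranks(values):
            order = sorted(range(len(values)), key=values.__getitem__)
            r = [0] * len(values)
            for rank, idx in enumerate(order, start=1):
                r[idx] = rank
            return r

        rx, ry = ranks(x), ranks(y)
        n = len(x)
        d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d_squared / (n * (n ** 2 - 1))
    ```

    Any monotonically increasing pairing yields rho = 1.0 and any monotonically decreasing one yields -1.0, which is why it suits ordinal measures like latency rankings versus test scores.
    
    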

  12. ONTOGENY OF EYEBLINK CONDITIONING IN THE RAT: AUDITORY FREQUENCY AND DISCRIMINATION LEARNING EFFECTS

    EPA Science Inventory

    The present study sought to determine whether acoustic properties of the auditory conditioned stimulus (CS) or the use of a discrimination learning procedure would alter the emergence of eyeblink conditioning between Postnatal Day 17 and 24 (PND17-24) in the rat. In Experiment 1, ...

  13. Auditory Training for Experienced and Inexperienced Second-Language Learners: Native French Speakers Learning English Vowels

    ERIC Educational Resources Information Center

    Iverson, Paul; Pinet, Melanie; Evans, Bronwen G.

    2012-01-01

    This study examined whether high-variability auditory training on natural speech can benefit experienced second-language English speakers who already are exposed to natural variability in their daily use of English. The subjects were native French speakers who had learned English in school; experienced listeners were tested in England and the less…

  14. The Effect of Auditory Integration Training on the Working Memory of Adults with Different Learning Preferences

    ERIC Educational Resources Information Center

    Ryan, Tamara E.

    2014-01-01

    The purpose of this study was to determine the effects of auditory integration training (AIT) on a component of the executive function of working memory; specifically, to determine if learning preferences might have an interaction with AIT to increase the outcome for some learners. The question asked by this quantitative pretest posttest design is…

  15. Learning the Phonological Forms of New Words: Effects of Orthographic and Auditory Input

    ERIC Educational Resources Information Center

    Hayes-Harb, Rachel; Nicol, Janet; Barker, Jason

    2010-01-01

    We investigated the relationship between the phonological and orthographic representations of new words for adult learners. Three groups of native English speakers learned a set of auditorily-presented pseudowords along with pictures indicating their "meanings". They were later tested on their memory of the words via an auditory word-picture…

  16. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  17. Auditory Spatial Discrimination and the Mismatch Negativity Response in Hearing-Impaired Individuals

    PubMed Central

    Cai, Yuexin; Zheng, Yiqing; Liang, Maojin; Zhao, Fei; Yu, Guangzheng; Liu, Yu; Chen, Yuebo; Chen, Guisheng

    2015-01-01

    The aims of the present study were to investigate the ability of hearing-impaired (HI) individuals with different binaural hearing conditions to discriminate spatial auditory-sources at the midline and lateral positions, and to explore the possible central processing mechanisms by measuring the minimal audible angle (MAA) and mismatch negativity (MMN) response. To measure MAA at the left/right 0°, 45° and 90° positions, 12 normal-hearing (NH) participants and 36 patients with sensorineural hearing loss, which included 12 patients with symmetrical hearing loss (SHL) and 24 patients with asymmetrical hearing loss (AHL) [12 with unilateral hearing loss on the left (UHLL) and 12 with unilateral hearing loss on the right (UHLR)] were recruited. In addition, 128-electrode electroencephalography was used to record the MMN response in a separate group of 60 patients (20 UHLL, 20 UHLR and 20 SHL patients) and 20 NH participants. The results showed MAA thresholds of the NH participants to be significantly lower than the HI participants. Also, a significantly smaller MAA threshold was obtained at the midline position than at the lateral position in both NH and SHL groups. However, in the AHL group, the MAA threshold for the 90° position on the affected side was significantly smaller than the MAA thresholds obtained at other positions. Significantly reduced amplitudes and prolonged latencies of the MMN were found in the HI groups compared to the NH group. In addition, contralateral activation was found in the UHL group for sounds emanating from the 90° position on the affected side and in the NH group. These findings suggest that the abilities of spatial discrimination at the midline and lateral positions vary significantly in different hearing conditions. A reduced MMN amplitude and prolonged latency together with bilaterally symmetrical cortical activations over the auditory hemispheres indicate possible cortical compensatory changes associated with poor behavioral spatial

  18. Auditory attention strategy depends on target linguistic properties and spatial configuration

    PubMed Central

    McCloy, Daniel R.; Lee, Adrian K. C.

    2015-01-01

    Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and requiring attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011

  19. Different response patterns between auditory spectral and spatial temporal order judgment (TOJ).

    PubMed

    Fostick, Leah; Babkoff, Harvey

    2013-01-01

    Temporal order judgment (TOJ) thresholds have been widely reported as valid estimates of the temporal disparity necessary for correctly identifying the order of two stimuli. Data for two auditory TOJ paradigms are often reported in the literature: (1) spatially-based TOJ in which the order of presentation of the same stimulus to the right and left ear differs; and (2) spectrally-based TOJ in which the order of two stimuli differing in frequency is presented to one ear or to both ears simultaneously. Since the thresholds reported using the two paradigms differ, the aim of the current study was to compare their response patterns. The results from three different experiments showed that: (1) while almost none of the participants were able to perform the spatial TOJ task when ISI = 5 ms, with the spectral task, 50% reached an accuracy level of 75% when ISI = 5 ms; (2) temporal separation was only a partial predictor for performance in the spectral task, while it fully predicted performance in the spatial task; and (3) training improved performance markedly in the spectral TOJ task, but had no effect on spatial TOJ. These results suggest that the two paradigms may reflect different perceptual mechanisms. PMID:23820944

  20. Functional specialization for auditory-spatial processing in the occipital cortex of congenitally blind humans.

    PubMed

    Collignon, Olivier; Vandewalle, Gilles; Voss, Patrice; Albouy, Geneviève; Charbonneau, Geneviève; Lassonde, Maryse; Lepore, Franco

    2011-03-15

    The study of the congenitally blind (CB) represents a unique opportunity to explore experience-dependant plasticity in a sensory region deprived of its natural inputs since birth. Although several studies have shown occipital regions of CB to be involved in nonvisual processing, whether the functional organization of the visual cortex observed in sighted individuals (SI) is maintained in the rewired occipital regions of the blind has only been recently investigated. In the present functional MRI study, we compared the brain activity of CB and SI processing either the spatial or the pitch properties of sounds carrying information in both domains (i.e., the same sounds were used in both tasks), using an adaptive procedure specifically designed to adjust for performance level. In addition to showing a substantial recruitment of the occipital cortex for sound processing in CB, we also demonstrate that auditory-spatial processing mainly recruits the right cuneus and the right middle occipital gyrus, two regions of the dorsal occipital stream known to be involved in visuospatial/motion processing in SI. Moreover, functional connectivity analyses revealed that these reorganized occipital regions are part of an extensive brain network including regions known to underlie audiovisual spatial abilities (i.e., intraparietal sulcus, superior frontal gyrus). We conclude that some regions of the right dorsal occipital stream do not require visual experience to develop a specialization for the processing of spatial information and to be functionally integrated in a preexisting brain network dedicated to this ability. PMID:21368198

  1. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
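The IACC referred to above is commonly quantified as the maximum of the normalized cross-correlation between the two channel signals over a small range of interaural lags. A minimal sketch of that measure; the ±1 ms lag range is the usual convention, an assumption here rather than a value stated in the abstract:

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the maximum of the
    normalized cross-correlation of the two channels over lags of up
    to +/- max_lag_ms milliseconds. Returns a value near 1 for
    identical channels and near 0 for fully decorrelated ones."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(left[lag:], right[:len(right) - lag])
        else:
            c = np.dot(left[:lag], right[-lag:])
        best = max(best, c / norm)
    return best
```

Mixing bass to a single driver forces IACC toward 1 at low frequencies, which is exactly the condition the reviewed studies argue degrades ASW, ASD, and LEV.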

  2. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  3. Computer-Based Auditory Training (CBAT): Benefits for Children with Language- and Reading-Related Learning Difficulties

    ERIC Educational Resources Information Center

    Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Campbell, Nicci; Luxon, Linda M.

    2010-01-01

    This article reviews the evidence for computer-based auditory training (CBAT) in children with language, reading, and related learning difficulties, and evaluates the extent to which it can benefit children with auditory processing disorder (APD). Searches were confined to studies published between 2000 and 2008, and they are rated according to the level…

  4. Role of cortical neurodynamics for understanding the neural basis of motivated behavior - lessons from auditory category learning.

    PubMed

    Ohl, Frank W

    2015-04-01

    Rhythmic activity appears in the auditory cortex in both microscopic and macroscopic observables and is modulated by both bottom-up and top-down processes. How this activity serves both types of processes is largely unknown. Here we review studies that have recently improved our understanding of potential functional roles of large-scale global dynamic activity patterns in auditory cortex. The experimental paradigm of auditory category learning allowed critical testing of the hypothesis that global auditory cortical activity states are associated with endogenous cognitive states mediating the meaning associated with an acoustic stimulus rather than with activity states that merely represent the stimulus for further processing. PMID:25241212

  5. Spatial Learning and Computer Simulations in Science

    ERIC Educational Resources Information Center

    Lindgren, Robb; Schwartz, Daniel L.

    2009-01-01

    Interactive simulations are entering mainstream science education. Their effects on cognition and learning are often framed by the legacy of information processing, which emphasized amodal problem solving and conceptual organization. In contrast, this paper reviews simulations from the vantage of research on perception and spatial learning,…

  6. Effects of auditory recognition learning on the perception of vocal features in European starlings (Sturnus vulgaris)

    PubMed Central

    Daniel Meliza, C.

    2011-01-01

    Learning to recognize complex sensory signals can change the way they are perceived. European starlings (Sturnus vulgaris) recognize other starlings by their song, which consists of a series of complex, stereotyped motifs. Song recognition learning is accompanied by plasticity in secondary auditory areas, suggesting that perceptual learning is involved. Here, to investigate whether perceptual learning can be observed behaviorally, a same–different operant task was used to measure how starlings perceived small differences in motif structure. Birds trained to recognize conspecific songs were better at detecting variations in motifs from the songs they learned, even though this variation was not directly necessary to learn the associative task. Discrimination also improved as the reference stimulus was repeated multiple times. Perception of the much larger differences between different motifs was unaffected by training. These results indicate that sensory representations of motifs are enhanced when starlings learn to recognize songs. PMID:22087940

  7. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain

    PubMed Central

    Lu, Kai; Vicario, David S.

    2014-01-01

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds. PMID:25246563

  9. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children

    PubMed Central

    Murphy, Cristina F. B.; Moore, David R.; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory (“bottom-up”) and cognitive (“top-down”) processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported “far-transfer” to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups

  11. Musical metaphors: evidence for a spatial grounding of non-literal sentences describing auditory events.

    PubMed

    Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2015-03-01

    This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would have been expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all Experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence implied pitch height and response location. The results are in line with grounded models of language comprehension proposing that sensory motor experiences are being elicited when processing literal as well as non-literal sentences. PMID:25443988

  12. Spatial profile and differential recruitment of GABAB modulate oscillatory activity in auditory cortex

    PubMed Central

    Oswald, Anne-Marie M.; Doiron, Brent; Rinzel, John; Reyes, Alex D.

    2009-01-01

    The interplay between inhibition and excitation is at the core of cortical network activity. In many cortices, including auditory cortex (ACx), interactions between excitatory and inhibitory neurons generate synchronous network gamma oscillations (30–70 Hz). Here, we show that differences in the connection patterns and synaptic properties of excitatory-inhibitory microcircuits permit the spatial extent of network inputs to modulate the magnitude of gamma oscillations. Simultaneous multiple whole-cell recordings from connected fast-spiking (FS) interneurons and pyramidal cells (PC) in L2/3 of mouse ACx slices revealed that for intersomatic distances <50 µm, most inhibitory connections occurred in reciprocally connected (RC) pairs; at greater distances, inhibitory connections were equally likely in RC and non-reciprocally connected (nRC) pairs. Furthermore, the GABAB mediated inhibition in RC pairs was weaker than in nRC pairs. Simulations with a network model that incorporated these features showed strong, gamma-band oscillations only when the network inputs were confined to a small area. These findings suggest a novel mechanism by which oscillatory activity can be modulated by adjusting the spatial distribution of afferent input. PMID:19692606

  13. Increased Signal Complexity Improves the Breadth of Generalization in Auditory Perceptual Learning

    PubMed Central

    Brown, David J.; Proulx, Michael J.

    2013-01-01

    Perceptual learning can be specific to a trained stimulus or optimally generalized to novel stimuli with the breadth of generalization being imperative for how we structure perceptual training programs. Adapting an established auditory interval discrimination paradigm to utilise complex signals, we trained human adults on a standard interval for either 2, 4, or 10 days. We then tested the standard, alternate frequency, interval, and stereo input conditions to evaluate the rapidity of specific learning and breadth of generalization over the time course. In comparison with previous research using simple stimuli, the speed of perceptual learning and breadth of generalization were more rapid and greater in magnitude, including novel generalization to an alternate temporal interval within stimulus type. We also investigated the long term maintenance of learning and found that specific and generalized learning was maintained over 3 and 6 months. We discuss these findings regarding stimulus complexity in perceptual learning and how they can inform the development of effective training protocols. PMID:24349800

  14. Developmental stress impairs performance on an association task in male and female songbirds, but impairs auditory learning in females only.

    PubMed

    Farrell, Tara M; Morgan, Amanda; MacDougall-Shackleton, Scott A

    2016-01-01

    In songbirds, early-life environments critically shape song development. Many studies have demonstrated that developmental stress impairs song learning and the development of song-control regions of the brain in males. However, song has evolved through signaller-receiver networks and the effect stress has on the ability to receive auditory signals is equally important, especially for females who use song as an indicator of mate quality. Female song preferences have been the metric used to evaluate how developmental stress affects auditory learning, but preferences are shaped by many non-cognitive factors and preclude the evaluation of auditory learning abilities in males. To determine whether developmental stress specifically affects auditory learning in both sexes, we subjected juvenile European starlings, Sturnus vulgaris, to either an ad libitum or an unpredictable food supply treatment from 35 to 115 days of age. In adulthood, we assessed learning of both auditory and visual discrimination tasks. Females reared in the experimental group were slower than females in the control group to acquire a relative frequency auditory task, and slower than their male counterparts to acquire an absolute frequency auditory task. There was no difference in auditory performance between treatment groups for males. However, on the colour association task, birds from the experimental group committed more errors per trial than control birds. There was no correlation in performance across the cognitive tasks. Developmental stress did not affect all cognitive processes equally across the sexes. Our results suggest that the male auditory system may be more robust to developmental stress than that of females. PMID:26238792

  15. Physical exercise, neuroplasticity, spatial learning and memory.

    PubMed

    Cassilhas, Ricardo C; Tufik, Sergio; de Mello, Marco Túlio

    2016-03-01

    There has long been discussion regarding the positive effects of physical exercise on brain activity. However, physical exercise has only recently begun to receive the attention of the scientific community, with major interest in its effects on the cognitive functions, spatial learning and memory, as a non-drug method of maintaining brain health and treating neurodegenerative and/or psychiatric conditions. In humans, several studies have shown the beneficial effects of aerobic and resistance exercises in adult and geriatric populations. More recently, studies employing animal models have attempted to elucidate the mechanisms underlying neuroplasticity related to physical exercise-induced spatial learning and memory improvement, even under neurodegenerative conditions. In an attempt to clarify these issues, the present review aims to discuss the role of physical exercise in the improvement of spatial learning and memory and the cellular and molecular mechanisms involved in neuroplasticity. PMID:26646070

  16. Extreme Learning Machines for spatial environmental data

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2015-12-01

    The use of machine learning algorithms has increased in a wide variety of domains (from finance to biocomputing and astronomy), and nowadays has a significant impact on the geoscience community. In most real cases geoscience data modelling problems are multivariate, high dimensional, variable at several spatial scales, and are generated by non-linear processes. For such complex data, the spatial prediction of continuous (or categorical) variables is a challenging task. The aim of this paper is to investigate the potential of the recently developed Extreme Learning Machine (ELM) for environmental data analysis, modelling and spatial prediction purposes. An important contribution of this study deals with an application of a generic self-consistent methodology for environmental data driven modelling based on Extreme Learning Machine. Both real and simulated data are used to demonstrate applicability of ELM at different stages of the study to understand and justify the results.
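The ELM discussed above is well defined algorithmically: hidden-layer weights are drawn at random and left fixed, and only the linear output weights are fitted, by least squares, so no iterative training is needed. A minimal regression sketch (the tanh activation, Gaussian initialization, and hidden-layer size are common ELM choices, not specifics of this paper):

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Fit an Extreme Learning Machine: random fixed hidden layer,
    least-squares output weights via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y                     # only these weights are trained
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

For spatial prediction of an environmental variable, `X` would hold coordinates (and covariates) and `y` the measured values at monitoring sites; the closed-form fit is what makes ELM attractive for the large, multivariate geoscience data sets the abstract describes.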

  17. Auditory tuning for spatial cues in the barn owl basal ganglia.

    PubMed

    Cohen, Y E; Knudsen, E I

    1994-07-01

    1. The basal ganglia are known to contribute to spatially guided behavior. In this study, we investigated the auditory response properties of neurons in the barn owl paleostriatum augmentum (PA), the homologue of the mammalian striatum. The data suggest that the barn owl PA is specialized to process spatial cues and, like the mammalian striatum, is involved in spatial behavior. 2. Single- and multiunit sites were recorded extracellularly in ketamine-anesthetized owls. Spatial receptive fields were measured with a free-field sound source, and tuning for frequency and interaural differences in timing (ITD) and level (ILD) was assessed using digitally synthesized dichotic stimuli. 3. Spatial receptive fields measured at nine multiunit sites were tuned to restricted regions of space: tuning widths at half-maximum response averaged 22 +/- 9.6 degrees (mean +/- SD) in azimuth and 54 +/- 22 degrees in elevation. 4. PA sites responded strongly to broadband sounds. When frequency tuning could be measured (n = 145/201 sites), tuning was broad, averaging 2.7 kHz at half-maximum response, and tended to be centered near the high end of the owl's audible range. The mean best frequency was 6.2 kHz. 5. All PA sites (n = 201) were selective for both ITD and ILD. ITD tuning curves typically exhibited a single, large "primary" peak and often smaller, "secondary" peaks at ITDs ipsilateral and/or contralateral to the primary peak. Three indices quantified the selectivity of PA sites for ITD. The first index, which was the percent difference between the minimum and maximum response as a function of ITD, averaged 100 +/- 29%. The second index, which represented the size of the largest secondary peak relative to that of the primary peak, averaged 49 +/- 23%. The third index, which was the width of the primary ITD peak at half-maximum response, averaged only 66 +/- 35 microseconds. 6. The majority (96%; n = 192/201) of PA sites were tuned to a single "best" value of ILD. 
The widths of ILD
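The first and third ITD selectivity indices defined in the abstract above are straightforward to compute from a tuning curve. A sketch on a synthetic single-peaked curve; note the half-max width routine assumes one contiguous primary peak, a simplification that ignores the secondary peaks the abstract describes:

```python
import numpy as np

def depth_of_modulation(resp):
    """Index 1: percent difference between the minimum and maximum
    response across ITD, relative to the maximum."""
    resp = np.asarray(resp, float)
    return 100.0 * (resp.max() - resp.min()) / resp.max()

def half_max_width(itds, resp):
    """Index 3: width of the primary peak at half-maximum response,
    in the units of `itds` (e.g. microseconds). Assumes a single
    contiguous region above half-maximum."""
    resp = np.asarray(resp, float)
    idx = np.flatnonzero(resp >= resp.max() / 2.0)
    return itds[idx[-1]] - itds[idx[0]]
```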

  18. Auditory evoked potential: a proposal for further evaluation in children with learning disabilities

    PubMed Central

    Frizzo, Ana C. F.

    2015-01-01

    The information presented in this paper demonstrates the author’s experience in previous cross-sectional studies conducted in Brazil, in comparison with the current literature. Over the last 10 years, auditory evoked potential (AEP) has been used in children with learning disabilities. This method is critical to analyze the quality of the processing in time and indicates the specific neural demands and circuits of the sensorial and cognitive process in this clinical population. Some studies with children with dyslexia and learning disabilities were shown here to illustrate the use of AEP in this population. PMID:26113833

  19. Learning to play a melody: an fMRI study examining the formation of auditory-motor associations.

    PubMed

    Chen, Joyce L; Rae, Charlotte; Watkins, Kate E

    2012-01-16

    Interactions between the auditory and motor systems are important for music and speech, and may be especially relevant when one learns to associate sounds with movements such as when learning to play a musical instrument. However, little is known about the neural substrates underlying auditory-motor learning. This study used fMRI to investigate the formation of auditory-motor associations while participants with no musical training learned to play a melody. Listening to melodies before and after training activated the superior temporal gyrus bilaterally, but neural activity in this region was significantly reduced on the right when participants listened to the trained melody. When playing melodies and random sequences, activity in the left dorsal premotor cortex (PMd) was reduced in the late compared to early phase of training; learning to play the melody was also associated with reduced neural activity in the left ventral premotor cortex (PMv). Participants with the highest performance scores for learning the melody showed more reduced neural activity in the left PMd and PMv. Learning to play a melody or random sequence involves acquiring conditional associations between key-presses and their corresponding musical pitches, and is related to activity in the PMd. Learning to play a melody additionally involves acquisition of a learned auditory-motor sequence and is related to activity in the PMv. Together, these findings demonstrate that auditory-motor learning is related to the reduction of neural activity in brain regions of the dorsal auditory action stream, which suggests increased efficiency in neural processing of a learned stimulus. PMID:21871571

  20. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes.

    PubMed

    Fischer, Brian J; Anderson, Charles H; Peña, José Luis

    2009-01-01

    A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system. PMID:19956693
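The model's two core operations, multiplication of ITD- and ILD-dependent signals within each frequency channel followed by linear-threshold integration across channels, can be caricatured as follows. The tuning values fed in are placeholders, not fits to owl data:

```python
import numpy as np

def channel_response(itd_signal, ild_signal):
    """Within a single frequency channel, combine the ITD- and
    ILD-dependent signals multiplicatively."""
    return itd_signal * ild_signal

def icx_response(itd_by_channel, ild_by_channel, threshold=0.0):
    """Integrate across frequency channels with a linear-threshold
    mechanism: sum the per-channel products, then half-rectify."""
    total = sum(channel_response(itd, ild)
                for itd, ild in zip(itd_by_channel, ild_by_channel))
    return np.maximum(total - threshold, 0.0)
```

The paper's key contrast is that this linear-threshold summation across frequency preserves responses to multiple concurrent sources, whereas multiplying across frequency channels would suppress them.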

  1. The auditory spatial acuity of the domestic cat in the interaural horizontal and median vertical planes.

    PubMed

    Martin, R L; Webster, W R

    1987-01-01

    The auditory spatial acuity of the domestic cat in the interaural horizontal plane was examined using broadband noise and nine pure-tone stimuli ranging in frequency from 0.5 to 32 kHz. Acuity in the median vertical plane was also examined using broadband noise and three pure tones of frequencies 2, 8 and 16 kHz. Minimum audible angles (MAAs) for a reference source directly in front of an animal were measured in the horizontal plane for five cats and in the vertical plane for four. The smallest MAAs measured were those for the noise stimulus, for which MAAs in the horizontal and vertical planes were similar in magnitude. Horizontal plane MAAs for low-frequency tones were smaller than those for high, and the pattern of MAA change with frequency was consistent with the use of interaural phase and sound pressure level difference cues to localize low- and high-frequency tones, respectively. Three of the four cats trained on the vertical plane MAA task did not achieve criterion performance for any of the three pure tones, and the MAAs obtained from the fourth cat at each frequency were relatively large. Vertical plane performance was consistent with the use of spectral transformation cues to discern the elevation of a complex stimulus. PMID:3680067
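For intuition about the interaural timing cue invoked above, the classic spherical-head (Woodworth) approximation relates source azimuth to ITD. This formula, and the 3 cm head radius used below, are textbook illustrations, not parameters from the cat study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.03, c=343.0):
    """Spherical-head (Woodworth) approximation of the interaural time
    difference for a distant source: ITD = (a / c) * (theta + sin(theta)),
    with theta the azimuth in radians (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))
```

Under this approximation a small-headed animal such as the cat has ITDs of only a couple of hundred microseconds even at 90° azimuth, which is why phase cues are useful only at low frequencies, consistent with the MAA pattern the abstract reports.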

  2. Relationships between measures of auditory verbal learning and executive functioning.

    PubMed

    Vanderploeg, R D; Schinka, J A; Retzlaff, P

    1994-04-01

    Relationships between performance on the California Verbal Learning Test (CVLT) and executive abilities were examined. In a sample of 115 neurological cases principal components factor analysis produced five theoretically and clinically meaningful CVLT factors. The five CVLT factors reflected general verbal learning (CVLT1), response discrimination (CVLT2), a proactive interference effect or "working memory" (CVLT3), serial learning strategy (CVLT4), and a retroactive interference effect (CVLT5). Canonical correlation between executive function measures and the five CVLT factor scores yielded one significant canonical variable accounting for 29 percent of the variance in the data. Two CVLT factors (CVLT1 and CVLT3), the Trail Making Test Part B, and Digit Span were significantly correlated with the canonical variate. Higher levels of memory performance were associated with better attention and mental tracking. Based on the present findings, attentional aspects of executive abilities appear to play a role in learning and working memory. Other aspects of executive abilities (abstraction, problem-solving, planning) appear to have minimal relationships with memory processes. PMID:8021311

  3. Learning Disabilities and the Auditory and Visual Matching Computer Program

    ERIC Educational Resources Information Center

    Tormanen, Minna R. K.; Takala, Marjatta; Sajaniemi, Nina

    2008-01-01

    This study examined whether audiovisual computer training without linguistic material had a remedial effect on different learning disabilities, like dyslexia and ADD (Attention Deficit Disorder). This study applied a pre-test-intervention-post-test design with students (N = 62) between the ages of 7 and 19. The computer training lasted eight weeks…

  4. Attentional demands modulate sensorimotor learning induced by persistent exposure to changes in auditory feedback.

    PubMed

    Scheerer, Nichole E; Tumber, Anupreet K; Jones, Jeffery A

    2016-02-01

    Hearing one's own voice is important for regulating ongoing speech and for mapping speech sounds onto articulator movements. However, it is currently unknown whether attention mediates changes in the relationship between motor commands and their acoustic output, which are necessary as growth and aging inevitably cause changes to the vocal tract. In this study, participants produced vocalizations while they heard their vocal pitch persistently shifted downward one semitone in both single- and dual-task conditions. During the single-task condition, participants vocalized while passively viewing a visual stream. During the dual-task condition, participants vocalized while also monitoring a visual stream for target letters, forcing participants to divide their attention. Participants' vocal pitch was measured across each vocalization, to index the extent to which their ongoing vocalization was modified as a result of the deviant auditory feedback. Smaller compensatory responses were recorded during the dual-task condition, suggesting that divided attention interfered with the use of auditory feedback for the regulation of ongoing vocalizations. Participants' vocal pitch was also measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback was used to modify subsequent speech motor commands. Smaller changes in vocal pitch at vocalization onset were recorded during the dual-task condition, suggesting that divided attention diminished sensorimotor learning. Together, the results of this study suggest that attention is required for the speech motor control system to make optimal use of auditory feedback for the regulation and planning of speech motor commands. PMID:26655821
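Pitch-shift compensation in paradigms like this is conventionally expressed in cents relative to baseline F0 (100 cents = 1 semitone). A minimal sketch of the arithmetic, with hypothetical F0 values chosen purely for illustration:

```python
import math

def cents(f0_hz, reference_hz):
    """Deviation of a produced F0 from a reference, in cents."""
    return 1200.0 * math.log2(f0_hz / reference_hz)

# Feedback shifted down one semitone is a -100 cent perturbation; a
# compensatory response raises produced pitch, i.e. positive cents
# relative to baseline. Both F0 values below are hypothetical.
baseline_f0 = 220.0   # speaker's unperturbed F0, Hz
produced_f0 = 223.2   # mid-vocalization F0 under shifted feedback, Hz
compensation = cents(produced_f0, baseline_f0)  # ~25 cents upward
```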

  5. Experimental Analysis of Spatial Learning in Goldfish

    ERIC Educational Resources Information Center

    Saito, Kotaro; Watanabe, Shigeru

    2005-01-01

    The present study examined spatial learning in goldfish using a new apparatus that was an open-field circular pool with latticed holes. The subjects were motivated to reach the baited hole. We examined gustatory cues, intramaze cues, the possibility that the subject could see the food, etc. In Experiment 1, the position of the baited hole was…

  6. Spatial Reference Frame of Incidentally Learned Attention

    ERIC Educational Resources Information Center

    Jiang, Yuhong V.; Swallow, Khena M.

    2013-01-01

    Visual attention prioritizes information presented at particular spatial locations. These locations can be defined in reference frames centered on the environment or on the viewer. This study investigates whether incidentally learned attention uses a viewer-centered or environment-centered reference frame. Participants conducted visual search on a…

  7. Sound Sequence Discrimination Learning Motivated by Reward Requires Dopaminergic D2 Receptor Activation in the Rat Auditory Cortex

    ERIC Educational Resources Information Center

    Kudoh, Masaharu; Shibuki, Katsuei

    2006-01-01

    We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…

  8. Dissociation of Neural Networks for Predisposition and for Training-Related Plasticity in Auditory-Motor Learning.

    PubMed

    Herholz, Sibylle C; Coffey, Emily B J; Pantev, Christo; Zatorre, Robert J

    2016-07-01

    Skill learning results in changes to brain function, but at the same time individuals strongly differ in their abilities to learn specific skills. Using a 6-week piano-training protocol and pre- and post-fMRI of melody perception and imagery in adults, we dissociate learning-related patterns of neural activity from pre-training activity that predicts learning rates. Fronto-parietal and cerebellar areas related to storage of newly learned auditory-motor associations increased their response following training; in contrast, pre-training activity in areas related to stimulus encoding and motor control, including right auditory cortex, hippocampus, and caudate nuclei, was predictive of subsequent learning rate. We discuss the implications of these results for models of perceptual and of motor learning. These findings highlight the importance of considering individual predisposition in plasticity research and applications. PMID:26139842

  9. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory

    PubMed Central

    Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning associations of congruent pairs. In addition, subjects’ performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance of the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model, composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of making more mistakes and reporting lower confidence in non-congruent pairs in the passive learning mode. PMID:27314235
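The memory-decay account can be caricatured in a few lines. This is a toy delta-rule learner with decay toward an uncertain prior — my construction for illustration, not the authors' model: faster decay leaves the non-congruent association strength closer to the uncertain midpoint, i.e. a weaker basis for rejecting non-congruent pairs and hence more false alarms.

```python
def run(alpha, decay, trials=50):
    """Toy associative learner. Each trial: delta-rule update toward the
    true label (1 = associated, 0 = not associated), then memory decay
    pulling both strengths back toward the uncertain 0.5 prior."""
    w_assoc, w_non = 0.5, 0.5
    for _ in range(trials):
        w_assoc += alpha * (1.0 - w_assoc)   # congruent pair reinforced
        w_non += alpha * (0.0 - w_non)       # non-congruent pair suppressed
        w_assoc += decay * (0.5 - w_assoc)   # decay toward prior
        w_non += decay * (0.5 - w_non)
    return w_assoc, w_non
```

With alpha = 0.3, the non-congruent strength settles near 0.07 for decay = 0.05 but near 0.29 for decay = 0.3: the high-decay learner remains far less certain that non-congruent pairs are unrelated.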

  10. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    PubMed

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning associations of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance of the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model, composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of making more mistakes and reporting lower confidence in non-congruent pairs in the passive learning mode. PMID:27314235

  11. Dorsal Hippocampus Function in Learning and Expressing a Spatial Discrimination

    ERIC Educational Resources Information Center

    White, Norman M.; Gaskin, Stephane

    2006-01-01

    Learning to discriminate between spatial locations defined by two adjacent arms of a radial maze in the conditioned cue preference paradigm requires two kinds of information: latent spatial learning when the rats explore the maze with no food available, and learning about food availability in two spatial locations when the rats are then confined…

  12. Spatial learning and memory in birds.

    PubMed

    Healy, Susan D; Hurly, T Andrew

    2004-01-01

Behavioral ecologists, well versed in addressing functional aspects of behavior, are increasingly acknowledging the attention they must also pay to mechanistic processes. One of these is the role of cognition. Song learning and imprinting are familiar examples of behaviors for which cognition plays an important role, but attention is now turning to other behaviors and a wider diversity of species. We focus here on work that investigates the nature of spatial learning and memory in the context of behaviors such as foraging and food storing. We also briefly explore the difficulties of studying cognition in the field. The common thread to all of this work is the value of using psychological techniques as tools for assessing learning and memory abilities in order to address questions of interest to behavioral ecologists. PMID:15084814

  13. [Asymmetry and spatial specificity of auditory aftereffects following adaptation to signals simulating approach and withdrawal of sound sources].

    PubMed

    Malinina, E S

    2014-01-01

The spatial specificity of auditory aftereffects from approaching and withdrawing sounds was investigated in an anechoic chamber. Adapting and test stimuli were presented from loudspeakers located in front of the subject at distances of 1.1 m (near) and 4.5 m (far) from the listener's head. Approach and withdrawal were simulated by increasing or decreasing the amplitude of a wideband noise impulse sequence. Listeners were required to judge the movement direction of the test stimulus after each 5-s adaptation period. The listeners' "withdrawal" responses were used to plot psychometric functions and to quantify the aftereffect. Data summarized across all 8 participants indicated that the asymmetry of the approaching and withdrawing aftereffects depended on the spatial locations of adaptor and test. The asymmetry was largest when adaptor and test were presented from the same loudspeaker (either near or far): adaptation to approach induced a directionally dependent displacement of the psychometric functions relative to the no-adaptation control, whereas adaptation to withdrawal did not. The approaching aftereffect was greater when adaptor and test were located in the near spatial domain than when they came from the far domain. When adaptor and test were presented from different loudspeakers, the approaching aftereffect decreased relative to the same-loudspeaker condition, whereas the withdrawing aftereffect increased; as a result, directionally dependent displacements of the psychometric functions relative to control were observed after adaptation both to approach and to withdrawal. The discrepancy between the psychometric functions obtained after adaptation to approach and to withdrawal, in the near and far spatial domains, was greater when adaptor and test shared the same location than when their locations differed. We assume that the peculiarities of

  14. Auditory Same/Different Concept Learning and Generalization in Black-Capped Chickadees (Poecile atricapillus)

    PubMed Central

    Hoeschele, Marisa; Cook, Robert G.; Guillette, Lauren M.; Hahn, Allison H.; Sturdy, Christopher B.

    2012-01-01

Abstract concept learning was thought to be uniquely human, but has since been observed in many other species. Discriminating same from different is one abstract relation that has been studied frequently. In the current experiment, using operant conditioning, we tested whether black-capped chickadees (Poecile atricapillus) could discriminate sets of auditory stimuli based on whether all the sounds within a sequence were the same or different from one another. The chickadees were successful at solving this same/different relational task, and transferred their learning to same/different sequences involving novel combinations of training notes and novel notes within the range of pitches experienced during training. The chickadees showed limited transfer to pitches that were not used in training, suggesting that the processing of absolute pitch may constrain their relational performance. Our results indicate, for the first time, that black-capped chickadees readily form relational auditory same and different categories, adding to the list of perceptual, behavioural, and cognitive abilities that make this species an important comparative model for human language and cognition. PMID:23077660

  15. The Application of an Animal Auditory Training Method as an Interchangeable Auditory Processing Learning Method for Children with Autism

    ERIC Educational Resources Information Center

    Adams, Deborah L.

    2012-01-01

    While the prevalence of autism continues to increase, there is a growing need for techniques that facilitate teaching this challenging population. The use of visual systems and prompting has been prevalent as well as effective; however, the use of auditory systems has been lacking in investigation. Ten children between the chronological ages of 4…

  16. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2010-01-01

Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by studying natural speech acquisition, and it provides a means of probing the boundaries and constraints that general auditory perception and cognition bring to the task of speech category learning. In this study, we used a multimodal, video-game-based implicit learning paradigm to train participants to categorize acoustically complex, nonlinguistic sounds. Mismatch negativity responses to the nonspeech stimuli were collected before and after training to investigate the degree to which neural changes supporting the learning of these nonspeech categories parallel those typically observed for speech category acquisition. Results indicate that changes in mismatch negativity resulting from the nonspeech category learning closely resemble patterns of change typically observed during speech category learning. This suggests that the often-observed “specialized” neural responses to speech sounds may result, at least in part, from the expertise we develop with speech categories through experience rather than from properties unique to speech (e.g., linguistic or vocal tract gestural information). Furthermore, particular characteristics of the training paradigm may inform our understanding of mechanisms that support natural speech acquisition. PMID:19929331

  17. Edouard Claparède and the auditory verbal learning test.

    PubMed

    Boake, C

    2000-04-01

    This paper describes the role of the Swiss psychologist Edouard Claparède (1873-1940) in developing the Test de mémoire des mots (Test of Memory for Words), a test consisting of one free-recall trial of a 15-word list that is the antecedent of the auditory verbal learning tests (AVLT) of Rey and others. The fact that Claparède's test has survived in modified form for 80 years makes it one of the oldest mental tests in continuous use. In addition to developing the AVLT, Claparède's pioneering contributions to neuropsychology include forensic assessment of cognitive deficits and research on implicit learning in amnesia. PMID:10779842

  18. Brain dynamics that correlate with effects of learning on auditory distance perception

    PubMed Central

    Wisniewski, Matthew G.; Mercado, Eduardo; Church, Barbara A.; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance. PMID:25538550
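ERS and ERD are conventionally quantified as power change in dB relative to a pre-stimulus baseline. A minimal single-band sketch using a short-time FFT follows; the window, hop, and band parameters are arbitrary choices for illustration, and published analyses such as this study typically use wavelet decompositions instead:

```python
import numpy as np

def ersp_db(trials, fs, f_lo, f_hi, win=64, hop=16, baseline_end=4):
    """Event-related spectral perturbation: trial-averaged band power per
    short-time window, in dB relative to the mean of the first
    `baseline_end` (pre-stimulus) windows. Positive = ERS, negative = ERD."""
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    taper = np.hanning(win)
    starts = range(0, trials.shape[1] - win + 1, hop)
    power = np.array([
        [np.sum(np.abs(np.fft.rfft(tr[s:s + win] * taper))[band] ** 2)
         for s in starts]
        for tr in trials
    ]).mean(axis=0)                      # average band power over trials
    return 10.0 * np.log10(power / power[:baseline_end].mean())
```

Feeding in trials whose oscillatory amplitude increases after stimulus onset yields positive dB values (ERS) in post-stimulus windows and values near zero in the baseline.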

  19. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  20. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  1. EXEL; Experience for Children in Learning. Parent-Directed Activities to Develop: Oral Expression, Visual Discrimination, Auditory Discrimination, Motor Coordination.

    ERIC Educational Resources Information Center

    Behrmann, Polly; Millman, Joan

    The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…

  2. Spatial learning by mice in three dimensions

    PubMed Central

    Wilson, Jonathan J.; Harding, Elizabeth; Fortier, Mathilde; James, Benjamin; Donnett, Megan; Kerslake, Alasdair; O’Leary, Alice; Zhang, Ningyu; Jeffery, Kate

    2015-01-01

    We tested whether mice can represent locations distributed throughout three-dimensional space, by developing a novel three-dimensional radial arm maze. The three-dimensional radial maze, or “radiolarian” maze, consists of a central spherical core from which arms project in all directions. Mice learn to retrieve food from the ends of the arms without omitting any arms or re-visiting depleted ones. We show here that mice can learn both a standard working memory task, in which all arms are initially baited, and also a reference memory version in which only a subset are ever baited. Comparison with a two-dimensional analogue of the radiolarian maze, the hexagon maze, revealed equally good working-memory performance in both mazes if all the arms were initially baited, but reduced working and reference memory in the partially baited radiolarian maze. This suggests intact three-dimensional spatial representation in mice over short timescales but impairment of the formation and/or use of long-term spatial memory of the maze. We discuss potential mechanisms for how mice solve the three-dimensional task, and reasons for the impairment relative to its two-dimensional counterpart, concluding with some speculations about how mammals may represent three-dimensional space. PMID:25930216

  3. Reduced Sensory Oscillatory Activity during Rapid Auditory Processing as a Correlate of Language-Learning Impairment

    PubMed Central

    Heim, Sabine; Friedman, Jennifer Thomas; Keil, Andreas; Benasich, April A.

    2010-01-01

    Successful language acquisition has been hypothesized to involve the ability to integrate rapidly presented, brief acoustic cues in sensory cortex. A body of work has suggested that this ability is compromised in language-learning impairment (LLI). The present research aimed to examine sensory integration during rapid auditory processing by means of electrophysiological measures of oscillatory brain activity using data from a larger longitudinal study. Twenty-nine children with LLI and control participants with typical language development (n=18) listened to tone doublets presented at a temporal interval that is essential for accurate speech processing (70-ms interstimulus interval). The children performed a deviant (pitch change of second tone) detection task, or listened passively. The electroencephalogram was recorded from 64 electrodes. Data were source-projected to the auditory cortices and submitted to wavelet analysis, resulting in time-frequency representations of electrocortical activity. Results show significantly reduced amplitude and phase-locking of early (45–75 ms) oscillations in the gamma-band range (29–52 Hz), specifically in the LLI group, for the second stimulus of the tone doublet. This suggests altered temporal organization of sensory oscillatory activity in LLI when processing rapid sequences. PMID:21822356
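Phase-locking across trials, as measured here, is commonly summarized by the phase-locking value (inter-trial coherence): the magnitude of the mean unit phasor across trials at one time-frequency point. A minimal sketch, independent of any particular wavelet implementation:

```python
import numpy as np

def phase_locking_value(phases):
    """Inter-trial phase-locking at one time-frequency point.
    1.0 = every trial has the same phase; values near 0 = random phase."""
    phases = np.asarray(phases, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))
```

Identical phases across trials give a value of 1.0, while phases spread uniformly around the circle cancel and give a value near 0 — the kind of reduction reported for the LLI group's gamma-band response.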

  4. Spatial representation of neural responses to natural and altered conspecific vocalizations in cat auditory cortex.

    PubMed

    Gourévitch, Boris; Eggermont, Jos J

    2007-01-01

    This study shows the neural representation of cat vocalizations, natural and altered with respect to carrier and envelope, as well as time-reversed, in four different areas of the auditory cortex. Multiunit activity recorded in primary auditory cortex (AI) of anesthetized cats mainly occurred at onsets (<200-ms latency) and at subsequent major peaks of the vocalization envelope and was significantly inhibited during the stationary course of the stimuli. The first 200 ms of processing appears crucial for discrimination of a vocalization in AI. The dorsal and ventral parts of AI appear to have different roles in coding vocalizations. The dorsal part potentially discriminated carrier-altered meows, whereas the ventral part showed differences primarily in its response to natural and time-reversed meows. In the posterior auditory field, the different temporal response types of neurons, as determined by their poststimulus time histograms, showed discrimination for carrier alterations in the meow. Sustained firing neurons in the posterior ectosylvian gyrus (EP) could discriminate, among others, by neural synchrony, temporal envelope alterations of the meow, and time reversion thereof. These findings suggest an important role of EP in the detection of information conveyed by the alterations of vocalizations. Discrimination of the neural responses to different alterations of vocalizations could be based on either firing rate, type of temporal response, or neural synchrony, suggesting that all these are likely simultaneously used in processing of natural and altered conspecific vocalizations. PMID:17021022

  5. Pharmacological specialization of learned auditory responses in the inferior colliculus of the barn owl.

    PubMed

    Feldman, D E; Knudsen, E I

    1998-04-15

    Neural tuning for interaural time difference (ITD) in the optic tectum of the owl is calibrated by experience-dependent plasticity occurring in the external nucleus of the inferior colliculus (ICX). When juvenile owls are subjected to a sustained lateral displacement of the visual field by wearing prismatic spectacles, the ITD tuning of ICX neurons becomes systematically altered; ICX neurons acquire novel auditory responses, termed "learned responses," to ITD values outside their normal, pre-existing tuning range. In this study, we compared the glutamatergic pharmacology of learned responses with that of normal responses expressed by the same ICX neurons. Measurements were made in the ICX using iontophoretic application of glutamate receptor antagonists. We found that in early stages of ITD tuning adjustment, soon after learned responses had been induced by experience-dependent processes, the NMDA receptor antagonist D, L-2-amino-5-phosphonopentanoic acid (AP-5) preferentially blocked the expression of learned responses of many ICX neurons compared with that of normal responses of the same neurons. In contrast, the non-NMDA receptor antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) blocked learned and normal responses equally. After long periods of prism experience, preferential blockade of learned responses by AP-5 was no longer observed. These results indicate that NMDA receptors play a preferential role in the expression of learned responses soon after these responses have been induced by experience-dependent processes, whereas later in development or with additional prism experience (we cannot distinguish which), the differential NMDA receptor-mediated component of these responses disappears. This pharmacological progression resembles the changes that occur during maturation of glutamatergic synaptic currents during early development. PMID:9526024

  6. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

Purpose Although auditory training has traditionally been studied in controlled laboratory settings, interest in more interactive options has been increasing. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on sentence recognition in quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884

  7. Assessing Spatial Learning and Memory in Rodents

    PubMed Central

    Vorhees, Charles V.; Williams, Michael T.

    2014-01-01

    Maneuvering safely through the environment is central to survival of almost all species. The ability to do this depends on learning and remembering locations. This capacity is encoded in the brain by two systems: an allocentric system that uses cues outside the organism (distal cues), and an egocentric system that uses self-movement, internal cues, and nearby proximal cues. Allocentric navigation involves the hippocampus, entorhinal cortex, and surrounding structures; in humans this system encodes allocentric, semantic, and episodic memory. This form of memory is assessed in laboratory animals in many ways, but the dominant form of assessment is the Morris water maze (MWM). Egocentric navigation involves the dorsal striatum and connected structures; in humans this system encodes routes and integrated paths and, when overlearned, becomes procedural memory. In this article, several allocentric assessment methods for rodents are reviewed and compared with the MWM. MWM advantages (little training required, no food deprivation, ease of testing, rapid and reliable learning, insensitivity to differences in body weight and appetite, absence of nonperformers, control methods for proximal cue learning, and performance effects) and disadvantages (concern about stress, perhaps not as sensitive for working memory) are discussed. Evidence-based design improvements and testing methods are reviewed for both rats and mice. Experimental factors that apply generally to spatial navigation and to the MWM specifically are considered. It is concluded that, on balance, the MWM has more advantages than disadvantages and compares favorably with other allocentric navigation tasks. PMID:25225309

  8. Proteome rearrangements after auditory learning: high-resolution profiling of synapse-enriched protein fractions from mouse brain.

    PubMed

    Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2016-07-01

    Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions, pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta-analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing. PMID

  9. Learning disability subtypes and the effects of auditory and visual priming on visual event-related potentials to words.

    PubMed

    Miles, J; Stelmack, R M

    1994-02-01

    Three learning-disability (LD) subtype groups and a normal control group of children were compared in their visual event-related potentials (ERPs) to primed and unprimed words. The LD subtypes were defined by deficient performance on tests of arithmetic (Group A), reading and spelling (Group RS), or both (Group RSA). The primed words were preceded by pictures or spoken words having a related meaning, while unprimed words were preceded by non-associated pictures or spoken words. For normal controls, N450 amplitude was greater to unprimed words than to words primed by pictures and spoken words. For Group A, N450 amplitude was reduced by spoken-word primes, but not by picture primes, an effect that demonstrates a deficit in processing visual-spatial information. For Group RS and Group RSA, neither picture nor spoken-word primes reduced N450 amplitude. These effects can be understood in terms of deficiencies in processing auditory-verbal information. Normal controls displayed a greater left- than right-hemispheric asymmetry in frontal N450 amplitude to unprimed words, an effect that is consistent with the association of skilled reading with hemispheric specialization. This asymmetry was absent in the ERPs of all the LD subtypes. The distinct ERP effects for the groups endorse the value of defining LD subtypes on the basis of patterns of deficits in arithmetic and reading and spelling. PMID:8150889

  10. Learning to match auditory and visual speech cues: social influences on acquisition of phonological categories.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development. PMID:25403424

  11. Selective importance of the rat anterior thalamic nuclei for configural learning involving distal spatial cues.

    PubMed

    Dumont, Julie R; Amin, Eman; Aggleton, John P

    2014-01-01

    To test potential parallels between hippocampal and anterior thalamic function, rats with anterior thalamic lesions were trained on a series of biconditional learning tasks. The anterior thalamic lesions did not disrupt learning two biconditional associations in operant chambers where a specific auditory stimulus (tone or click) had a differential outcome depending on whether it was paired with a particular visual context (spot or checkered wall-paper) or a particular thermal context (warm or cool). Likewise, rats with anterior thalamic lesions successfully learnt a biconditional task when they were reinforced for digging in one of two distinct cups (containing either beads or shredded paper), depending on the particular appearance of the local context on which the cup was placed (one of two textured floors). In contrast, the same rats were severely impaired at learning the biconditional rule to select a specific cup when in a particular location within the test room. Place learning was then tested with a series of go/no-go discriminations. Rats with anterior thalamic nuclei lesions could learn to discriminate between two locations when they were approached from a constant direction. They could not, however, use this acquired location information to solve a subsequent spatial biconditional task where those same places dictated the correct choice of digging cup. Anterior thalamic lesions produced a selective, but severe, biconditional learning deficit when the task incorporated distal spatial cues. This deficit mirrors that seen in rats with hippocampal lesions, so extending potential interdependencies between the two sites. PMID:24215178

  12. Selective importance of the rat anterior thalamic nuclei for configural learning involving distal spatial cues

    PubMed Central

    Dumont, Julie R; Amin, Eman; Aggleton, John P

    2013-01-01

    To test potential parallels between hippocampal and anterior thalamic function, rats with anterior thalamic lesions were trained on a series of biconditional learning tasks. The anterior thalamic lesions did not disrupt learning two biconditional associations in operant chambers where a specific auditory stimulus (tone or click) had a differential outcome depending on whether it was paired with a particular visual context (spot or checkered wall-paper) or a particular thermal context (warm or cool). Likewise, rats with anterior thalamic lesions successfully learnt a biconditional task when they were reinforced for digging in one of two distinct cups (containing either beads or shredded paper), depending on the particular appearance of the local context on which the cup was placed (one of two textured floors). In contrast, the same rats were severely impaired at learning the biconditional rule to select a specific cup when in a particular location within the test room. Place learning was then tested with a series of go/no-go discriminations. Rats with anterior thalamic nuclei lesions could learn to discriminate between two locations when they were approached from a constant direction. They could not, however, use this acquired location information to solve a subsequent spatial biconditional task where those same places dictated the correct choice of digging cup. Anterior thalamic lesions produced a selective, but severe, biconditional learning deficit when the task incorporated distal spatial cues. This deficit mirrors that seen in rats with hippocampal lesions, so extending potential interdependencies between the two sites. PMID:24215178

  13. Jumpstarting auditory learning in children with cochlear implants through music experiences.

    PubMed

    Barton, Christine; Robbins, Amy McConkey

    2015-09-01

    Musical experiences are a valuable part of the lives of children with cochlear implants (CIs). In addition to the pleasure, relationships and emotional outlet provided by music, it serves to enhance or 'jumpstart' other auditory and cognitive skills that are critical for development and learning throughout the lifespan. Musicians have been shown to be 'better listeners' than non-musicians with regard to how they perceive and process sound. A heuristic model of music therapy is reviewed, including six modulating factors that may account for the auditory advantages demonstrated by those who participate in music therapy. The integral approach to music therapy is described along with the hybrid approach to pediatric language intervention. These approaches share the characteristics of placing high value on ecologically valid therapy experiences, i.e., engaging in 'real' music and 'real' communication. Music and language intervention techniques used by the authors are presented. It has been documented that children with CIs consistently have lower music perception scores than do their peers with normal hearing (NH). On the one hand, this finding matters a great deal because it provides parameters for setting reasonable expectations and highlights the work still required to improve signal processing with the devices so that they more accurately transmit music to CI listeners. On the other hand, the finding might not matter much if we assume that music, even in its less-than-optimal state, functions for CI children, as for NH children, as a developmental jumpstarter, a language-learning tool, a cognitive enricher, a motivator, and an attention enhancer. PMID:26561888

  14. Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss

    PubMed Central

    Karawani, Hanin; Bitan, Tali; Attias, Joseph; Banai, Karen

    2016-01-01

    Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: Fifty-six listeners (60–72 y/o) participated in the study: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate training, delayed training, and no training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning but there was little generalization to

  15. Spatially Distributed Instructions Improve Learning Outcomes and Efficiency

    ERIC Educational Resources Information Center

    Jang, Jooyoung; Schunn, Christian D.; Nokes, Timothy J.

    2011-01-01

    Learning requires applying limited working memory and attentional resources to intrinsic, germane, and extraneous aspects of the learning task. To reduce the especially undesirable extraneous load aspects of learning environments, cognitive load theorists suggest that spatially integrated learning materials should be used instead of spatially…

  16. Sensory Processing of Backward-Masking Signals in Children with Language-Learning Impairment as Assessed with the Auditory Brainstem Response.

    ERIC Educational Resources Information Center

    Marler, Jeffrey A.; Champlin, Craig A.

    2005-01-01

    The purpose of this study was to examine the possible contribution of sensory mechanisms to an auditory processing deficit shown by some children with language-learning impairment (LLI). Auditory brainstem responses (ABRs) were measured from 2 groups of school-aged (8-10 years) children. One group consisted of 10 children with LLI, and the other…

  17. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue. PMID:26842012

  18. Learning Hierarchical Spectral-Spatial Features for Hyperspectral Image Classification.

    PubMed

    Zhou, Yicong; Wei, Yantao

    2016-07-01

    This paper proposes a spectral-spatial feature learning (SSFL) method to obtain robust features of hyperspectral images (HSIs). It combines the spectral feature learning and spatial feature learning in a hierarchical fashion. Stacking a set of SSFL units, a deep hierarchical model called the spectral-spatial networks (SSN) is further proposed for HSI classification. SSN can exploit both discriminative spectral and spatial information simultaneously. Specifically, SSN learns useful high-level features by alternating between spectral and spatial feature learning operations. Then, kernel-based extreme learning machine (KELM), a shallow neural network, is embedded in SSN to classify image pixels. Extensive experiments are performed on two benchmark HSI datasets to verify the effectiveness of SSN. Compared with state-of-the-art methods, SSN with a deep hierarchical architecture obtains higher classification accuracy in terms of the overall accuracy, average accuracy, and kappa (κ) coefficient of agreement, especially when the number of training samples is small. PMID:26241988
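The SSN architecture itself is not reproducible from this abstract, but the KELM classifier it embeds has a well-known closed form: ridge regression over a kernel matrix, with beta = (K + I/C)^-1 T for one-hot targets T. A minimal sketch under that standard formulation (the RBF kernel choice, the toy "pixel" data, and the C and gamma values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

class KELM:
    """Kernel extreme learning machine: closed-form ridge solution
    beta = (K + I/C)^-1 T over one-hot targets T."""
    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]  # one-hot class targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        # Class scores for new samples; argmax gives the predicted label.
        return rbf_kernel(X, self.X, self.gamma) @ self.beta

# Toy two-class "pixels": two well-separated clusters in feature space.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
pred = KELM(C=10.0, gamma=1.0).fit(X, y).predict(X).argmax(axis=1)
```

The single linear solve is what makes KELM attractive as the shallow classifier stage here: no iterative training is needed once the hierarchical features are extracted.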

  19. Robot navigation algorithms using learned spatial graphs

    SciTech Connect

    Iyengar, S.S.; Jorgensen, C.C.; Rao, S.V.N.; Weisbin, C.R.

    1985-01-01

    Finding optimal paths for robot navigation in known terrain has been studied for some time but, in many important situations, a robot would be required to navigate in completely new or partially explored terrain. We propose a method of robot navigation which requires no pre-learned model, makes maximal use of available information, records and synthesizes information from multiple journeys, and contains concepts of learning that allow for continuous transition from local to global path optimality. The model of the terrain consists of a spatial graph and a Voronoi diagram. Using acquired sensor data, polygonal boundaries containing perceived obstacles shrink to approximate the actual obstacles' surfaces, free space for transit is correspondingly enlarged, and additional nodes and edges are recorded based on path intersections and stop points. Navigation planning is gradually accelerated with experience since improved global map information minimizes the need for further sensor data acquisition. Our method currently assumes obstacle locations are unchanging, navigation can be successfully conducted using two-dimensional projections, and sensor information is precise.
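Setting aside the Voronoi-diagram component, the core idea of recording and synthesizing information from multiple journeys can be sketched as an incrementally learned graph searched with Dijkstra's algorithm. A hypothetical sketch (the `SpatialGraph` API, node names, and cost model are invented for illustration, not from the paper):

```python
import heapq
from collections import defaultdict

class SpatialGraph:
    """Graph of traversable waypoints, learned incrementally from journeys."""
    def __init__(self):
        self.edges = defaultdict(dict)  # node -> {neighbor: cost}

    def record_journey(self, path, costs):
        """Merge one traversed route into the map, keeping the cheapest
        observed cost for each edge (undirected)."""
        for (a, b), c in zip(zip(path, path[1:]), costs):
            old = self.edges[a].get(b, float("inf"))
            self.edges[a][b] = self.edges[b][a] = min(old, c)

    def shortest_path(self, start, goal):
        """Dijkstra over everything learned so far; path quality improves
        as more journeys are recorded."""
        pq, seen = [(0.0, start, [start])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, c in self.edges[node].items():
                if nxt not in seen:
                    heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
        return float("inf"), []

# Two journeys share the waypoint "hall"; planning combines them.
g = SpatialGraph()
g.record_journey(["dock", "hall", "lab"], [1.0, 1.0])
g.record_journey(["hall", "store"], [1.0])
cost, path = g.shortest_path("dock", "store")
```

Because each journey only tightens edge costs and adds nodes, replanning over the accumulated graph captures the paper's transition from locally to globally optimal paths as experience grows.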

  20. Spatial learning in men undergoing alcohol detoxification.

    PubMed

    Ceccanti, Mauro; Hamilton, Derek; Coriale, Giovanna; Carito, Valentina; Aloe, Luigi; Chaldakov, George; Romeo, Marina; Ceccanti, Marco; Iannitelli, Angela; Fiore, Marco

    2015-10-01

    Alcohol dependence is a major public health problem worldwide. Brain and behavioral disruptions including changes in cognitive abilities are common features of alcohol addiction. Thus, the present study was aimed to investigate spatial learning and memory in 29 alcoholic men undergoing alcohol detoxification by using a virtual Morris maze task. As age-matched controls we recruited 29 men among occasional drinkers without history of alcohol dependence and/or alcohol related diseases and with a negative blood alcohol level at the time of testing. We found that the responses to the virtual Morris maze are impaired in men undergoing alcohol detoxification. Notably, they showed increased latencies in the first movement during the trials, increased latencies in retrieving the hidden platform, and increased latencies in reaching the visible platform. These findings were associated with reduced swimming time in the target quadrant of the pool where the platform had been during the 4 hidden platform trials of the learning phase compared to controls. Such increased latencies may suggest motor control, attentional, and motivational deficits due to alcohol detoxification. PMID:26143187

  1. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    PubMed

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. PMID:24035820
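The segmentation-by-conditional-probability idea the abstract describes can be illustrated with a toy computation: estimate the probability of each sound given the one before it, and place boundaries where that probability dips to a local minimum. This is a sketch of the standard transitional-probability account, not the authors' stimuli or analysis pipeline (the letter "syllables" and threshold rule are illustrative):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) for adjacent elements of a sequence."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(stream, tps):
    """Insert a boundary wherever the transitional probability between
    adjacent elements is a strict local minimum."""
    probs = [tps[(a, b)] for a, b in zip(stream, stream[1:])]
    boundaries = [i + 1 for i in range(1, len(probs) - 1)
                  if probs[i] < probs[i - 1] and probs[i] < probs[i + 1]]
    words, prev = [], 0
    for b in boundaries + [len(stream)]:
        words.append(tuple(stream[prev:b]))
        prev = b
    return words

# Toy stream built from two "words", ABC and DEF: within-word transitions
# are highly predictable, between-word transitions are not.
stream = list("ABCDEFABCABCDEFDEFABC")
words = segment(stream, transitional_probabilities(stream))
```

On this toy stream the dips in conditional probability fall exactly at the word boundaries, so the recovered chunks are only ABC and DEF, which is the behavioral signature the statistical-learning studies cited here rely on.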

  2. Guidance of Spatial Attention by Incidental Learning and Endogenous Cuing

    ERIC Educational Resources Information Center

    Jiang, Yuhong V.; Swallow, Khena M.; Rosenbaum, Gail M.

    2013-01-01

    Our visual system is highly sensitive to regularities in the environment. Locations that were important in one's previous experience are often prioritized during search, even though observers may not be aware of the learning. In this study we characterized the guidance of spatial attention by incidental learning of a target's spatial probability,…

  3. Intrinsic frames of reference in haptic spatial learning.

    PubMed

    Yamamoto, Naohide; Philbeck, John W

    2013-11-01

    It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, it has only been collected from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded. PMID:24007919

  4. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    PubMed

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  5. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children

    PubMed Central

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C.; Hornickel, Jane; Strait, Dana L.; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  6. Transcriptome changes associated with instructed learning in the barn owl auditory localization pathway.

    PubMed

    Swofford, Janet A; DeBello, William M

    2007-09-15

    Owls reared wearing prismatic spectacles learn to make adaptive orienting movements. This instructed learning depends on re-calibration of the midbrain auditory space map, which in turn involves the formation of new synapses. Here we investigated whether these processes are associated with differential gene expression, using longSAGE. Newly fledged owls were reared for 8-36 days with prism or control lenses, at which time the extent of learning was quantified by electrophysiological mapping. Transcriptome profiles were obtained from the inferior colliculus (IC), the major site of synaptic plasticity, and the optic tectum (OT), which provides an instructive signal that controls the direction and extent of plasticity. Twenty-two differentially expressed sequence tags were identified in IC and 36 in OT, out of more than 35,000 unique tags. Of these, only four were regulated in both structures. These results indicate that regulation of two largely independent gene clusters is associated with synaptic remodeling (in IC) and generation of the instructive signal (in OT). Real-time PCR data confirmed the changes for two transcripts, ubiquitin/polyubiquitin and tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein, theta subunit (YWHAQ; also referred to as 14-3-3 protein). Ubiquitin was downregulated in IC, consistent with a model in which protein degradation pathways act as an inhibitory constraint on synaptogenesis. YWHAQ was up-regulated in OT, indicating a role in the synthesis or delivery of instructive information. In total, our results provide a path towards unraveling molecular cascades that link naturalistic experience with synaptic remodeling and, ultimately, with the expression of learned behavior. PMID:17526003

  7. Elevated yolk progesterone moderates prenatal heart rate and postnatal auditory learning in bobwhite quail (Colinus virginianus).

    PubMed

    Herrington, Joshua A; Rodriguez, Yvette; Lickliter, Robert

    2016-09-01

Previous studies have established that yolk hormones of maternal origin can influence physiology and behavior in birds. However, few studies have examined the effects of maternal gestagens, like progesterone, on chick behavior and physiology. We tested the effects of experimentally elevated egg yolk progesterone on embryonic heart rate and postnatal auditory learning in bobwhite quail hatchlings. Quail chicks were passively exposed to an individual maternal assembly call for 10 min/hr during the 24 hr following hatching. Preference for the familiarized call was tested at 48 hr following hatching in three experimental groups: chicks that received artificially elevated yolk progesterone (P) prior to incubation, vehicle-only controls (V), and non-manipulated controls (C). Resting heart rate of P, V, and C embryos was also measured on prenatal day 17. The resting heart rate of P embryos was significantly higher than that of both the V and C embryos. Chicks from the P group also showed an enhanced preference for the familiarized bobwhite maternal call when compared to chicks from the C and V groups. Our results indicate that elevated yolk progesterone in pre-incubated bobwhite quail eggs can influence arousal level in bobwhite embryos and postnatal perceptual learning in bobwhite neonates. PMID:27108924

  8. Prenatal complex rhythmic music sound stimulation facilitates postnatal spatial learning but transiently impairs memory in the domestic chick.

    PubMed

    Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S

    2011-01-01

Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds or no stimulation (control) was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at age 24, 72 and 120 h were tested on a T-maze for spatial learning, and the memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, the plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less time (p < 0.001) to navigate the maze over the three trials, thereby showing an improvement with training. In both sound-stimulated groups, the total time taken to reach the target decreased significantly (p < 0.01) in comparison to the unstimulated control group, indicating the facilitation of spatial learning. However, this decline was greater at 24 h than at later posthatch ages. When tested for memory 24 h after training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p < 0.001) time to traverse the maze, suggesting a temporary impairment in their retention of the learnt task. In both sound-stimulated groups at 24 h, the plasma corticosterone levels were significantly decreased (p < 0.001) and increased thereafter at 72 h (p < 0.001) and 120 h, which may contribute to the differential response in spatial learning. Thus, prenatal auditory stimulation with either species-specific or complex rhythmic music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory. PMID:21212638

  9. Development of Critical Spatial Thinking through GIS Learning

    ERIC Educational Resources Information Center

    Kim, Minsung; Bednarz, Robert

    2013-01-01

    This study developed an interview-based critical spatial thinking oral test and used the test to investigate the effects of Geographic Information System (GIS) learning on three components of critical spatial thinking: evaluating data reliability, exercising spatial reasoning, and assessing problem-solving validity. Thirty-two students at a large…

  10. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    PubMed

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-01-01

Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. Compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group but remained consistent for the trained group at the post-training session. Moreover, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production. PMID:26278337
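Pitch perturbations and compensation magnitudes in auditory-feedback studies like this one are conventionally expressed in cents, i.e., hundredths of a semitone relative to the speaker's produced fundamental frequency. The abstract does not give the perturbation size used here, but the standard conversion is straightforward:

```python
import math

def cents(f, f0):
    """Deviation of frequency f from reference f0, in cents.
    100 cents = one equal-tempered semitone; 1200 cents = one octave.
    This is the standard unit for pitch-shifted feedback paradigms."""
    return 1200.0 * math.log2(f / f0)
```

For example, a feedback shift from 220 Hz to 440 Hz is one octave, i.e., 1200 cents; typical perturbations in this literature are far smaller, on the order of tens to a few hundred cents.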