Sample records for auditory spatial learning

  1. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  2. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    PubMed

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  3. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
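
    The abstract's core claim (spatial information becomes linearly decodable from an ICA code of joint binaural spectra) can be illustrated with a toy sketch. The snippet below is not the paper's implementation; the binaural feature generator, band count, and ILD model are all invented for illustration, and scikit-learn's FastICA stands in for the efficient-coding transform.

```python
# Hedged sketch (not the paper's implementation): apply ICA to joint left/right
# spectral features of simulated binaural sounds and check that the learned
# components carry sound-position information. All signal parameters are invented.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def binaural_features(azimuth_deg, n_bands=32):
    """Toy left/right spectral bands for a broadband source at a given azimuth.
    The interaural level difference (ILD) grows with band and azimuth (crude model)."""
    source = rng.normal(0.0, 1.0, n_bands)                 # random spectral shape
    bands = np.linspace(0.2, 8.0, n_bands)                 # nominal band centers (kHz)
    ild = 0.1 * bands * np.sin(np.deg2rad(azimuth_deg))    # dB-like ILD per band
    left = source + ild / 2 + 0.05 * rng.normal(size=n_bands)
    right = source - ild / 2 + 0.05 * rng.normal(size=n_bands)
    return np.concatenate([left, right])                   # joint binaural feature vector

azimuths = rng.uniform(-90, 90, 2000)
X = np.stack([binaural_features(a) for a in azimuths])

# Learn a linear "efficient" code of the joint binaural input.
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
S = ica.fit_transform(X)                                   # component activations per sound

# If the code carries spatial information, azimuth should be recoverable from the
# activations with a simple linear readout (here predicting sin(azimuth)).
readout = LinearRegression().fit(S, np.sin(np.deg2rad(azimuths)))
print("R^2 of azimuth readout from ICA activations:",
      round(readout.score(S, np.sin(np.deg2rad(azimuths))), 3))
```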

  4. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.

  5. Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues

    ERIC Educational Resources Information Center

    Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.

    2016-01-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…

  6. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  7. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  8. Plasticity of spatial hearing: behavioural effects of cortical inactivation

    PubMed Central

    Nodal, Fernando R; Bajo, Victoria M; King, Andrew J

    2012-01-01

    The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635

  9. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control.
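
    The Stevens' power law equalization mentioned above can be sketched as follows: invert the law (perceived magnitude = k * intensity^a) so that perceived circle size and perceived loudness both grow linearly with the COP-target error. This is only an illustration of the principle; the exponents and scaling constants below are assumed textbook-style values, not those used in the study.

```python
# Hedged sketch of magnitude equalization via Stevens' power law, psi = k * phi**a:
# invert the law so that perceived circle size and perceived loudness both grow
# linearly with the COP-target error. Exponents are assumed illustrative values,
# not the ones used in the study.
A_VISUAL = 0.7      # assumed exponent for perceived size of the feedback circle
A_AUDITORY = 0.67   # assumed exponent for loudness as a function of amplitude

def visual_feedback(error, k=1.0):
    """Circle size whose *perceived* magnitude is proportional to the error."""
    return (error / k) ** (1.0 / A_VISUAL)

def auditory_feedback(error, k=1.0):
    """Sound amplitude whose *perceived* loudness is proportional to the same error."""
    return (error / k) ** (1.0 / A_AUDITORY)

for err in (0.5, 1.0, 2.0):
    print(f"error={err:.1f}  circle size={visual_feedback(err):.2f}  "
          f"sound amplitude={auditory_feedback(err):.2f}")
```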

  10. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
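
    As a rough illustration of why generic HRTFs are attractive in VR, rendering a spatialized source is essentially a per-ear convolution of the mono signal with a head-related impulse response. The sketch below substitutes a crude delay-and-attenuate pair for a measured (generic or individualized) HRIR set, so it captures only interaural time and level differences, not the spectral pinna cues discussed in the abstract.

```python
# Minimal sketch of HRTF-style binaural rendering: convolve a mono source with a
# left- and a right-ear impulse response. A toy delay-and-attenuate pair stands in
# for a measured HRIR, so pinna spectral cues are deliberately missing.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 500 * t) * np.exp(-3 * t)       # any mono source signal

def toy_hrir(azimuth_deg, fs, ear):
    """Single delayed, scaled impulse giving an interaural time and level difference."""
    lateral = np.sin(np.deg2rad(azimuth_deg))              # -1 (far left) .. +1 (far right)
    sign = 1.0 if ear == "right" else -1.0
    delay_s = 0.00035 * (1.0 - sign * lateral)             # nearer ear gets less delay
    gain = 1.0 + 0.3 * sign * lateral                      # nearer ear gets more level
    h = np.zeros(256)
    h[int(round(delay_s * fs))] = gain
    return h

azimuth = 45.0                                             # source to the listener's right
left = fftconvolve(mono, toy_hrir(azimuth, fs, "left"))
right = fftconvolve(mono, toy_hrir(azimuth, fs, "right"))
binaural = np.stack([left, right], axis=1)                 # two-channel signal for headphones
print(binaural.shape)
```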

  11. Evidence for multisensory spatial-to-motor transformations in aiming movements of children.

    PubMed

    King, Bradley R; Kagerer, Florian A; Contreras-Vidal, Jose L; Clark, Jane E

    2009-01-01

    The extant developmental literature investigating age-related differences in the execution of aiming movements has predominantly focused on visuomotor coordination, despite the fact that additional sensory modalities, such as audition and somatosensation, may contribute to motor planning, execution, and learning. The current study investigated the execution of aiming movements toward both visual and acoustic stimuli. In addition, we examined the interaction between visuomotor and auditory-motor coordination as 5- to 10-yr-old participants executed aiming movements to visual and acoustic stimuli before and after exposure to a visuomotor rotation. Children in all age groups demonstrated significant improvement in performance under the visuomotor perturbation, as indicated by decreased initial directional and root mean squared errors. Moreover, children in all age groups demonstrated significant visual aftereffects during the postexposure phase, suggesting a successful update of their spatial-to-motor transformations. Interestingly, these updated spatial-to-motor transformations also influenced auditory-motor performance, as indicated by distorted movement trajectories during the auditory postexposure phase. The distorted trajectories were present during auditory postexposure even though the auditory-motor relationship was not manipulated. Results suggest that by the age of 5 yr, children have developed a multisensory spatial-to-motor transformation for the execution of aiming movements toward both visual and acoustic targets.

  12. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have provided researchers in the spatial community with tools to investigate the learning of space. The issue of transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measured systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performance in the real environment.

  13. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
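
    The opponent ("hemifield") code described above can be illustrated with two broadly tuned channels whose normalized difference is read out as location; the sigmoidal tuning width and the readout below are invented for illustration and are not taken from the paper's learned model.

```python
# Illustrative sketch of an opponent ("hemifield") two-channel code for azimuth,
# the kind of representation the paper argues emerges from efficient coding.
# The tuning width and the difference readout are invented for illustration.
import numpy as np

def channel_response(azimuth_deg, preferred_side):
    """Broadly tuned sigmoidal channel: peak firing for lateral positions on the
    preferred side, steepest slope near the interaural midline."""
    sign = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + np.exp(-sign * np.asarray(azimuth_deg) / 15.0))

azimuths = np.linspace(-90, 90, 181)                       # 1-degree steps
right_ch = channel_response(azimuths, "right")
left_ch = channel_response(azimuths, "left")

# A simple readout of location is the normalized difference of the two channels;
# it changes fastest near 0 degrees, matching the midline acuity advantage noted above.
readout = (right_ch - left_ch) / (right_ch + left_ch)
slope = np.gradient(readout, azimuths)
print(f"readout slope at  0 deg: {slope[90]:.4f} per deg")
print(f"readout slope at 80 deg: {slope[170]:.4f} per deg")
```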

  14. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
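
    The multivoxel pattern analysis (MVPA) referred to above amounts to training a classifier to decode the attended side from distributed voxel patterns within a region of interest. The sketch below uses synthetic data in place of fMRI estimates and a linear SVM as one common classifier choice; it illustrates the method generically rather than reproducing the study's pipeline.

```python
# Hedged sketch of the MVPA logic: decode the attended side (left vs. right) from
# voxel response patterns within a region of interest. Synthetic data stand in for
# the fMRI estimates; a linear SVM is one common classifier choice, not the study's.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)                  # 0 = attend left, 1 = attend right

# Weak, distributed pattern difference between conditions plus trial-by-trial noise.
pattern = rng.normal(0.0, 0.3, n_voxels)
X = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + np.outer(labels - 0.5, pattern)

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, labels, cv=5)                # stratified 5-fold cross-validation
print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f} (chance = 0.50)")
```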

  15. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/postcomparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback. Furthermore, independent of the training group, a significant spatial pre-post difference was found in the event-related component P200 (P = .04).

  16. Fear Conditioning is Disrupted by Damage to the Postsubiculum

    PubMed Central

    Robinson, Siobhan; Bucci, David J.

    2011-01-01

    The hippocampus plays a central role in spatial and contextual learning and memory; however, relatively little is known about the specific contributions of parahippocampal structures that interface with the hippocampus. The postsubiculum (PoSub) is reciprocally connected with a number of hippocampal, parahippocampal and subcortical structures that are involved in spatial learning and memory. In addition, behavioral data suggest that PoSub is needed for optimal performance during tests of spatial memory. Together, these data suggest that PoSub plays a prominent role in spatial navigation. Currently it is unknown whether the PoSub is needed for other forms of learning and memory that also require the formation of associations among multiple environmental stimuli. To address this gap in the literature we investigated the role of PoSub in Pavlovian fear conditioning. In Experiment 1, male rats received either lesions of PoSub or sham surgery prior to training in a classical fear conditioning procedure. On the training day a tone was paired with foot shock three times. Conditioned fear to the training context was evaluated 24 hr later by placing rats back into the conditioning chamber without presenting any tones or shocks. Auditory fear was assessed on the third day by presenting the auditory stimulus in a novel environment (no shock). PoSub-lesioned rats exhibited impaired acquisition of the conditioned fear response as well as impaired expression of contextual and auditory fear conditioning. In Experiment 2, PoSub lesions were made 1 day after training to specifically assess the role of PoSub in fear memory. No deficits in the expression of contextual fear were observed, but freezing to the tone was significantly reduced in PoSub-lesioned rats compared to shams. Together, these results indicate that PoSub is necessary for normal acquisition of conditioned fear, and that PoSub contributes to the expression of auditory but not contextual fear memory. PMID:22076971

  17. Musical learning in children and adults with Williams syndrome.

    PubMed

    Lense, M; Dykens, E

    2013-09-01

    There is recent interest in using music making as an empirically supported intervention for various neurodevelopmental disorders due to music's engagement of perceptual-motor mapping processes. However, little is known about music learning in populations with developmental disabilities. Williams syndrome (WS) is a neurodevelopmental genetic disorder whose characteristic auditory strengths and visual-spatial weaknesses map onto the processes used to learn to play a musical instrument. We identified correlates of novel musical instrument learning in WS by teaching 46 children and adults (7-49 years) with WS to play the Appalachian dulcimer. Obtained dulcimer skill was associated with prior musical abilities (r = 0.634, P < 0.001) and visual-motor integration abilities (r = 0.487, P = 0.001), but not age, gender, IQ, handedness, auditory sensitivities or musical interest/emotionality. Use of auditory learning strategies, but not visual or instructional strategies, predicted greater dulcimer skill beyond individual musical and visual-motor integration abilities (β = 0.285, sr² = 0.06, P = 0.019). These findings map onto behavioural and emerging neural evidence for greater auditory-motor mapping processes in WS. Results suggest that explicit awareness of task-specific learning approaches is important when learning a new skill. Implications for using music with populations with syndrome-specific strengths and weaknesses will be discussed.

  18. Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.

    PubMed

    Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J

    2016-06-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall, our results indicate the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent.

  19. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  20. Long-Term Memory Biases Auditory Spatial Attention

    ERIC Educational Resources Information Center

    Zimmermann, Jacqueline F.; Moscovitch, Morris; Alain, Claude

    2017-01-01

    Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants…

  1. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  2. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    PubMed

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  3. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps

    PubMed Central

    Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237

  4. Prenatal complex rhythmic music sound stimulation facilitates postnatal spatial learning but transiently impairs memory in the domestic chick.

    PubMed

    Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S

    2011-01-01

    Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds or no stimulation (control) was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at age 24, 72 and 120 h were tested on a T-maze for spatial learning and the memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, the plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less (p < 0.001) time to navigate the maze over the three trials, thereby showing an improvement with training. In both sound-stimulated groups, the total time taken to reach the target decreased significantly (p < 0.01) in comparison to the unstimulated control group, indicating the facilitation of spatial learning. However, this decline was more at 24 h than at later posthatch ages. When tested for memory after 24 h of training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p < 0.001) time to traverse the maze, suggesting a temporary impairment in their retention of the learnt task. In both sound-stimulated groups at 24 h, the plasma corticosterone levels were significantly decreased (p < 0.001) and increased thereafter at 72 h (p < 0.001) and 120 h, which may contribute to the differential response in spatial learning. Thus, prenatal auditory stimulation with either species-specific or complex rhythmic music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory.

  5. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  6. Action video games improve reading abilities and visual-to-auditory attentional shifting in English-speaking children with dyslexia.

    PubMed

    Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone

    2017-07-19

    Dyslexia is characterized by difficulties in learning to read and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to deep English orthography, remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from visual to auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of words recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting can directly translate into better reading in English-speaking children with dyslexia.

  7. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  8. Long-term memory biases auditory spatial attention.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2017-10-01

    Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants heard audio clips, some of which included a lateralized target (p = 50%). On each trial participants indicated whether the target was presented from the left, right, or was absent. Following a 1 hr retention interval, participants were presented with the same audio clips, which now all included a target. In Experiment 1, participants showed memory-based gains in response time and d'. Experiment 2 showed that temporal expectations modulate attention, with greater memory-guided attention effects on performance when temporal context was reinstated from learning (i.e., when timing of the target within audio clips was not changed from initially learned timing). Experiment 3 showed that while conscious recall of target locations was modulated by exposure to target-context associations during learning (i.e., better recall with higher number of learning blocks), the influence of LTM associations on spatial attention was not reduced (i.e., number of learning blocks did not affect memory-guided attention). Both Experiments 2 and 3 showed gains in performance related to target-context associations, even for associations that were not explicitly remembered. Together, these findings indicate that memory for audio clips is acquired quickly and is surprisingly robust; both implicit and explicit LTM for the location of a faint target tone modulated auditory spatial attention.
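
    For reference, the sensitivity measure d' reported above is conventionally computed as the difference of z-transformed hit and false-alarm rates; a minimal sketch, assuming a yes/no detection design with illustrative counts, is given below.

```python
# Minimal sketch of the sensitivity index d' mentioned above, assuming a yes/no
# detection design: d' = z(hit rate) - z(false-alarm rate). Counts are illustrative.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Standard d' with a simple clipping correction for perfect (0 or 1) rates."""
    def rate(a, b):
        r = a / (a + b)
        eps = 1.0 / (2 * (a + b))
        return min(max(r, eps), 1.0 - eps)
    return norm.ppf(rate(hits, misses)) - norm.ppf(rate(false_alarms, correct_rejections))

print(round(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38), 2))
```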

  9. Noise-induced tinnitus using individualized gap detection analysis and its relationship with hyperacusis, anxiety, and spatial cognition.

    PubMed

    Pace, Edward; Zhang, Jinsheng

    2013-01-01

    Tinnitus has a complex etiology that involves auditory and non-auditory factors and may be accompanied by hyperacusis, anxiety and cognitive changes. Thus far, investigations of the interrelationship between tinnitus and auditory and non-auditory impairment have yielded conflicting results. To further address this issue, we noise-exposed rats and assessed them for tinnitus using a gap detection behavioral paradigm combined with statistically-driven analysis to diagnose tinnitus in individual rats. We also tested rats for hearing detection, responsivity, and loss using prepulse inhibition and auditory brainstem response, and for spatial cognition and anxiety using the Morris water maze and elevated plus maze. We found that our tinnitus diagnosis method reliably separated noise-exposed rats into tinnitus(+) and tinnitus(-) groups and detected no evidence of tinnitus in tinnitus(-) and control rats. In addition, the tinnitus(+) group demonstrated enhanced startle amplitude, indicating hyperacusis-like behavior. Despite these results, neither tinnitus, hyperacusis nor hearing loss yielded any significant effects on spatial learning and memory or anxiety, though a majority of rats with the highest anxiety levels had tinnitus. These findings showed that we were able to develop a clinically relevant tinnitus(+) group and that our diagnosis method is sound. At the same time, like clinical studies, we found that tinnitus does not always result in cognitive-emotional dysfunction, although tinnitus may predispose subjects to certain impairments such as anxiety. Other behavioral assessments may be needed to further define the relationship between tinnitus and anxiety, cognitive deficits, and other impairments.

  10. Auditory and motor imagery modulate learning in music performance

    PubMed Central

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences. PMID:23847495

  11. Multiple brain networks underpinning word learning from fluent speech revealed by independent component analysis.

    PubMed

    López-Barroso, Diana; Ripollés, Pablo; Marco-Pallarés, Josep; Mohammadi, Bahram; Münte, Thomas F; Bachoud-Lévi, Anne-Catherine; Rodriguez-Fornells, Antoni; de Diego-Balaguer, Ruth

    2015-04-15

    Although neuroimaging studies using standard subtraction-based analysis from functional magnetic resonance imaging (fMRI) have suggested that frontal and temporal regions are involved in word learning from fluent speech, the possible contribution of different brain networks during this type of learning is still largely unknown. Indeed, univariate fMRI analyses cannot identify the full extent of distributed networks that are engaged by a complex task such as word learning. Here we used Independent Component Analysis (ICA) to characterize the different brain networks subserving word learning from an artificial language speech stream. Results were replicated in a second cohort of participants with a different linguistic background. Four spatially independent networks were associated with the task in both cohorts: (i) a dorsal Auditory-Premotor network; (ii) a dorsal Sensory-Motor network; (iii) a dorsal Fronto-Parietal network; and (iv) a ventral Fronto-Temporal network. The level of engagement of these networks varied through the learning period with only the dorsal Auditory-Premotor network being engaged across all blocks. In addition, the connectivity strength of this network in the second block of the learning phase correlated with the individual variability in word learning performance. These findings suggest that: (i) word learning relies on segregated connectivity patterns involving dorsal and ventral networks; and (ii) specifically, the dorsal auditory-premotor network connectivity strength is directly correlated with word learning performance.

  12. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  13. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  14. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.

  15. Auditory cortical activity after intracortical microstimulation and its role for sensory processing and learning.

    PubMed

    Deliano, Matthias; Scheich, Henning; Ohl, Frank W

    2009-12-16

    Several studies have shown that animals can learn to make specific use of intracortical microstimulation (ICMS) of sensory cortex within behavioral tasks. Here, we investigate how the focal, artificial activation by ICMS leads to a meaningful, behaviorally interpretable signal. In natural learning, this involves large-scale activity patterns in widespread brain-networks. We therefore trained gerbils to discriminate closely neighboring ICMS sites within primary auditory cortex producing evoked responses largely overlapping in space. In parallel, during training, we recorded electrocorticograms (ECoGs) at high spatial resolution. Applying a multivariate classification procedure, we identified late spatial patterns that emerged with discrimination learning from the ongoing poststimulus ECoG. These patterns contained information about the preceding conditioned stimulus, and were associated with a subsequent correct behavioral response by the animal. Thereby, relevant pattern information was mainly carried by neuron populations outside the range of the lateral spatial spread of ICMS-evoked cortical activation (approximately 1.2 mm). This demonstrates that the stimulated cortical area not only encoded information about the stimulation sites by its focal, stimulus-driven activation, but also provided meaningful signals in its ongoing activity related to the interpretation of ICMS learned by the animal. This involved the stimulated area as a whole, and apparently required large-scale integration in the brain. However, ICMS locally interfered with the ongoing cortical dynamics by suppressing pattern formation near the stimulation sites. The interaction between ICMS and ongoing cortical activity has several implications for the design of ICMS protocols and cortical neuroprostheses, since the meaningful interpretation of ICMS depends on this interaction.

  16. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  17. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes.

    PubMed

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).

  18. Noise-Induced Tinnitus Using Individualized Gap Detection Analysis and Its Relationship with Hyperacusis, Anxiety, and Spatial Cognition

    PubMed Central

    Pace, Edward; Zhang, Jinsheng

    2013-01-01

    Tinnitus has a complex etiology that involves auditory and non-auditory factors and may be accompanied by hyperacusis, anxiety and cognitive changes. Thus far, investigations of the interrelationship between tinnitus and auditory and non-auditory impairment have yielded conflicting results. To further address this issue, we noise exposed rats and assessed them for tinnitus using a gap detection behavioral paradigm combined with statistically-driven analysis to diagnose tinnitus in individual rats. We also tested rats for hearing detection, responsivity, and loss using prepulse inhibition and auditory brainstem response, and for spatial cognition and anxiety using Morris water maze and elevated plus maze. We found that our tinnitus diagnosis method reliably separated noise-exposed rats into tinnitus(+) and tinnitus(−) groups and detected no evidence of tinnitus in tinnitus(−) and control rats. In addition, the tinnitus(+) group demonstrated enhanced startle amplitude, indicating hyperacusis-like behavior. Despite these results, neither tinnitus, hyperacusis nor hearing loss yielded any significant effects on spatial learning and memory or anxiety, though a majority of rats with the highest anxiety levels had tinnitus. These findings showed that we were able to develop a clinically relevant tinnitus(+) group and that our diagnosis method is sound. At the same time, like clinical studies, we found that tinnitus does not always result in cognitive-emotional dysfunction, although tinnitus may predispose subjects to certain impairment like anxiety. Other behavioral assessments may be needed to further define the relationship between tinnitus and anxiety, cognitive deficits, and other impairments. PMID:24069375

  19. Cognitive Load Theory and Music Instruction

    ERIC Educational Resources Information Center

    Owens, Paul; Sweller, John

    2008-01-01

    In two experiments, the principles of cognitive load theory were applied to the design of alternatives to conventional music instruction hypothesised to facilitate learning. Experiment 1 demonstrated that spatial integration of visual text and musical notation, and dual-modal delivery of auditory text and musical notation, were superior to the…

  20. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    ERIC Educational Resources Information Center

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  1. Verbal short-term memory and vocabulary learning in polyglots.

    PubMed

    Papagno, C; Vallar, G

    1995-02-01

    Polyglot and non-polyglot Italian subjects were given tests assessing verbal (phonological) and visuo-spatial short-term and long-term memory, general intelligence, and vocabulary knowledge in their native language. Polyglots had a superior level of performance in verbal short-term memory tasks (auditory digit span and nonword repetition) and in a paired-associate learning test, which assessed the subjects' ability to acquire new (Russian) words. By contrast, the two groups had comparable performance levels in tasks assessing general intelligence, visuo-spatial short-term memory and learning, and paired-associate learning of Italian words. These findings, which are in line with neuropsychological and developmental evidence, as well as with data from normal subjects, suggest a close relationship between the capacity of phonological memory and the acquisition of foreign languages.

  2. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  3. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention was different but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
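
    The component measures reported here are mean amplitudes of the averaged waveform within fixed post-stimulus windows. A minimal sketch of that computation is shown below, using an assumed sampling rate, epoch length, and synthetic single-channel data rather than the study's recording parameters.

      # Illustrative mean-amplitude measurement for ERP components (synthetic epochs).
      import numpy as np

      fs = 500                                    # sampling rate in Hz (assumed)
      tmin = -0.2                                 # epoch start relative to stimulus onset, in seconds
      epochs = np.random.randn(40, 350)           # stand-in for 40 trials x 700 ms of one channel

      erp = epochs.mean(axis=0)                   # average across trials to obtain the ERP
      times = tmin + np.arange(erp.size) / fs     # time axis in seconds

      def window_mean(erp, times, start, stop):
          """Mean ERP amplitude between start and stop (seconds after onset)."""
          mask = (times >= start) & (times <= stop)
          return erp[mask].mean()

      p1 = window_mean(erp, times, 0.090, 0.110)   # P1 window, 90-110 ms
      n1 = window_mean(erp, times, 0.150, 0.170)   # N1 window, 150-170 ms
      lp = window_mean(erp, times, 0.300, 0.420)   # late positivity, 300-420 ms
      print(p1, n1, lp)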

  4. Prenatal Loud Music and Noise: Differential Impact on Physiological Arousal, Hippocampal Synaptogenesis and Spatial Behavior in One Day-Old Chicks

    PubMed Central

    Sanyal, Tania; Kumar, Vivek; Nag, Tapas Chandra; Jain, Suman; Sreenivas, Vishnu; Wadhwa, Shashi

    2013-01-01

    Prenatal auditory stimulation in chicks with species-specific sound and music at 65 dB facilitates spatial orientation and learning and is associated with significant morphological and biochemical changes in the hippocampus and brainstem auditory nuclei. An increased noradrenaline level due to physiological arousal is suggested as a possible mediator for the observed beneficial effects following patterned and rhythmic sound exposure. However, the effects of prenatal high-decibel sound (110 dB; music and noise) exposure on the plasma noradrenaline level, synaptic protein expression in the hippocampus and spatial behavior of neonatal chicks remained unexplored. Here, we report that high-decibel music stimulation moderately increases the plasma noradrenaline level and positively modulates spatial orientation, learning and memory of one day-old chicks. In contrast, noise at the same sound pressure level results in an excessive increase of the plasma noradrenaline level and impairs spatial behavior. Further, to assess the changes at the molecular level, we quantified the expression of the functional synapse markers synaptophysin and PSD-95 in the hippocampus. Compared to the controls, both proteins showed significantly increased expression in the music-stimulated group but decreased expression in the noise group. We propose that the differential increase of the plasma noradrenaline level and the altered expression of synaptic proteins in the hippocampus are responsible for the observed behavioral consequences following prenatal 110 dB music and noise stimulation. PMID:23861759

  5. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  6. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000

  7. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  8. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  9. Listening to music primes space: pianists, but not novices, simulate heard actions.

    PubMed

    Taylor, J Eric T; Witt, Jessica K

    2015-03-01

    Musicians sometimes report twitching in their fingers or hands while listening to music. This anecdote could be indicative of a tendency for auditory-motor co-representation in musicians. Here, we describe two studies showing that pianists (Experiment 1), but not novices (Experiment 2), automatically generate spatial representations that correspond to learned musical actions while listening to music. Participants made one-handed movements to the left or right from a central location in response to visual stimuli while listening to task-irrelevant auditory stimuli, which were scales played on a piano. These task-irrelevant scales were either ascending (compatible with rightward movements) or descending (compatible with leftward movements). Pianists were faster to respond when the scale direction was compatible with the direction of response movement, whereas novices' movements were unaffected by the scale. These results are in agreement with existing research on action-effect coupling in musicians, which draws heavily on common coding theory. In addition, these results show how intricate auditory stimuli (ascending or descending scales) evoke coarse, domain-general spatial representations.

  10. Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne

    2016-12-01

    It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    PubMed

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  13. Bilateral cochlear implants in children: Effects of auditory experience and deprivation on auditory perception

    PubMed Central

    Litovsky, Ruth Y.; Gordon, Karen

    2017-01-01

    Spatial hearing skills are essential for children as they grow, learn and play. They provide critical cues for determining the locations of sources in the environment, and enable segregation of important sources, such as speech, from background maskers or interferers. Spatial hearing depends on availability of monaural cues and binaural cues. The latter result from integration of inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity. The major cues that are thus referred to as being vital for binaural hearing are: interaural differences in time (ITDs) and interaural differences in levels (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4–5 years. In contrast, children who are deaf and hear through cochlear implants (CIs) do not have an opportunity to experience normal, binaural acoustic hearing early in life. These children may function by having to utilize auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than when listening through a single CI, they generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the relevant set of issues is highly complex with regard to successes and the limitations experienced by children receiving bilateral cochlear implants. PMID:26828740

  14. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
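
    One standard way to formalize how the probability of visual capture falls off with audio-visual disparity is the Gaussian causal-inference model, in which the observer computes the posterior probability that the two cues share a common source. The sketch below implements that generic textbook formulation with arbitrary noise and prior parameters; it is not the specific Bayesian model fitted in this study.

      # Standard Gaussian causal-inference sketch for audio-visual capture (illustrative parameters).
      import numpy as np

      def p_common(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=20.0, mu_p=0.0, prior_common=0.5):
          """Posterior probability that the auditory and visual cues came from one source."""
          va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
          # Likelihood of the two noisy cues under a single shared source location.
          denom1 = va * vv + va * vp + vv * vp
          like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp
                                   + (x_a - mu_p)**2 * vv
                                   + (x_v - mu_p)**2 * va) / denom1) / (2 * np.pi * np.sqrt(denom1))
          # Likelihood under two independent sources.
          like_c2 = (np.exp(-0.5 * (x_a - mu_p)**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
                     * np.exp(-0.5 * (x_v - mu_p)**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))
          return like_c1 * prior_common / (like_c1 * prior_common + like_c2 * (1 - prior_common))

      # Capture (perceived unity) is likely at small disparities and unlikely at large ones.
      for disparity in (0.0, 5.0, 15.0, 30.0):
          print(disparity, round(float(p_common(x_a=disparity, x_v=0.0)), 3))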

  15. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  17. Outline for Remediation of Problem Areas for Children with Learning Disabilities. Revised. = Bosquejo para la Correccion de Areas Problematicas para Ninos con Impedimientos del Aprendizaje.

    ERIC Educational Resources Information Center

    Bornstein, Joan L.

    The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…

  18. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  19. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

    Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual

  20. Sex differences in behavioral outcomes following temperature modulation during induced neonatal hypoxic ischemic injury in rats.

    PubMed

    Smith, Amanda L; Garbus, Haley; Rosenkrantz, Ted S; Fitch, Roslyn Holly

    2015-05-22

    Neonatal hypoxia ischemia (HI; reduced oxygen and/or blood flow to the brain) can cause various degrees of tissue damage, as well as subsequent cognitive/behavioral deficits such as motor, learning/memory, and auditory impairments. These outcomes frequently result from cardiovascular and/or respiratory events observed in premature infants. Data suggests that there is a sex difference in HI outcome, with males being more adversely affected relative to comparably injured females. Brain/body temperature may play a role in modulating the severity of an HI insult, with hypothermia during an insult yielding more favorable anatomical and behavioral outcomes. The current study utilized a postnatal day (P) 7 rodent model of HI injury to assess the effect of temperature modulation during injury in each sex. We hypothesized that female P7 rats would benefit more from lowered body temperatures as compared to male P7 rats. We assessed all subjects on rota-rod, auditory discrimination, and spatial/non-spatial maze tasks. Our results revealed a significant benefit of temperature reduction in HI females as measured by most of the employed behavioral tasks. However, HI males benefitted from temperature reduction as measured on auditory and non-spatial tasks. Our data suggest that temperature reduction protects both sexes from the deleterious effects of HI injury, but task and sex specific patterns of relative efficacy are seen.

  1. Medial Auditory Thalamus Is Necessary for Acquisition and Retention of Eyeblink Conditioning to Cochlear Nucleus Stimulation

    ERIC Educational Resources Information Center

    Halverson, Hunter E.; Poremba, Amy; Freeman, John H.

    2015-01-01

    Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…

  2. Cognitive Control Structures in the Imitation Learning of Spatial Sequences and Rhythms-An fMRI Study.

    PubMed

    Sakreida, Katrin; Higuchi, Satomi; Di Dio, Cinzia; Ziessler, Michael; Turgeon, Martine; Roberts, Neil; Vogt, Stefan

    2018-03-01

    Imitation learning involves the acquisition of novel motor patterns based on action observation (AO). We used event-related functional magnetic resonance imaging to study the imitation learning of spatial sequences and rhythms during AO, motor imagery (MI), and imitative execution in nonmusicians and musicians. While both tasks engaged the fronto-parietal mirror circuit, the spatial sequence task recruited posterior parietal and dorsal premotor regions more strongly. The rhythm task involved an additional network for auditory working memory. This partial dissociation supports the concept of task-specific mirror mechanisms. Two regions of cognitive control were identified: 1) dorsolateral prefrontal cortex (DLPFC) was found to be more strongly activated during MI of novel spatial sequences, which allowed us to extend the 2-level model of imitation learning by Buccino et al. (2004) to spatial sequences. 2) During imitative execution of both tasks, the posterior medial frontal cortex was robustly activated, along with the DLPFC, which suggests that both regions are involved in the cognitive control of imitation learning. The musicians' selective behavioral advantage for rhythm imitation was reflected cortically in enhanced sensory-motor processing during AO and by the absence of practice-related activation differences in DLPFC during rhythm execution. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. The medial prefrontal cortex and memory of cue location in the rat.

    PubMed

    Rawson, Tim; O'Kane, Michael; Talk, Andrew

    2010-01-01

    We developed a single-trial cue-location memory task in which rats experienced an auditory cue while exploring an environment. They then recalled and avoided the sound origination point after the cue was paired with shock in a separate context. Subjects with medial prefrontal cortical (mPFC) lesions made no such avoidance response, but both lesioned and control subjects avoided the cue itself when presented at test. A follow-up assessment revealed no spatial learning impairment in either group. These findings suggest that the rodent mPFC is required for incidental learning or recollection of the location at which a discrete cue occurred, but is not required for cue recognition or for allocentric spatial memory. Copyright 2009 Elsevier Inc. All rights reserved.

  4. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  5. Modality-specificity of Selective Attention Networks.

    PubMed

    Stewart, Hannah J; Amitay, Sygal

    2015-01-01

    The study aimed to establish the modality specificity and generality of selective attention networks. Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
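
    As a rough illustration of the factor-analysis step described above, the sketch below runs a four-factor decomposition of a standardized participants-by-measures score matrix with scikit-learn. The measure labels, sample size, random scores, and varimax-rotated FactorAnalysis are all placeholder assumptions, not the authors' EFA procedure.

      # Illustrative exploratory factor analysis of attention-test scores (random placeholder data).
      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      measures = ["TEA_1", "TEA_2", "vANT_orient", "aANT_orient", "vANT_conflict",
                  "aANT_conflict", "TAiL_pitch", "TAiL_location"]        # hypothetical labels
      rng = np.random.default_rng(1)
      scores = rng.standard_normal((48, len(measures)))                   # stand-in for 48 participants

      z = StandardScaler().fit_transform(scores)                          # standardize each measure
      fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
      fa.fit(z)

      # Loadings show which measures group together on each factor
      # (e.g., a "general", an "auditory", and two "spatial" components in the study).
      for i, loading in enumerate(fa.components_):
          top = [measures[j] for j in np.argsort(np.abs(loading))[::-1][:3]]
          print(f"factor {i + 1}: strongest loadings on {top}")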

  6. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    PubMed

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-12-20

    The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-sessions learning occurred irrespective of feedback. EF was found beneficial for auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.

  7. Neural preservation underlies speech improvement from auditory deprivation in young cochlear implant recipients.

    PubMed

    Feng, Gangyi; Ingvalson, Erin M; Grieco-Calub, Tina M; Roberts, Megan Y; Ryan, Maura E; Birmingham, Patrick; Burrowes, Delilah; Young, Nancy M; Wong, Patrick C M

    2018-01-30

    Although cochlear implantation enables some children to attain age-appropriate speech and language development, communicative delays persist in others, and outcomes are quite variable and difficult to predict, even for children implanted early in life. To understand the neurobiological basis of this variability, we used presurgical neural morphological data obtained from MRI of individual pediatric cochlear implant (CI) candidates implanted younger than 3.5 years to predict variability of their speech-perception improvement after surgery. We first compared neuroanatomical density and spatial pattern similarity of CI candidates to that of age-matched children with normal hearing, which allowed us to detail neuroanatomical networks that were either affected or unaffected by auditory deprivation. This information enables us to build machine-learning models to predict the individual children's speech development following CI. We found that regions of the brain that were unaffected by auditory deprivation, in particular the auditory association and cognitive brain regions, produced the highest accuracy, specificity, and sensitivity in patient classification and the most precise prediction results. These findings suggest that brain areas unaffected by auditory deprivation are critical to developing closer to typical speech outcomes. Moreover, the findings suggest that determination of the type of neural reorganization caused by auditory deprivation before implantation is valuable for predicting post-CI language outcomes for young children.
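
    The prediction step can be made concrete with a generic cross-validated classification sketch: presurgical regional brain measures are used to classify children into outcome groups. The feature matrix, labels, and the linear support-vector classifier below are assumptions for illustration only, not the machine-learning models built in the study.

      # Generic sketch of predicting post-CI outcome groups from presurgical MRI features
      # (synthetic data and an assumed classifier; not the authors' pipeline).
      import numpy as np
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_children, n_regions = 37, 90
      features = rng.standard_normal((n_children, n_regions))   # stand-in for regional density values
      outcome = rng.integers(0, 2, size=n_children)              # stand-in labels: good vs. poor improvement

      # Leave-one-out cross-validation estimates how well unseen children
      # would be classified from their presurgical anatomy alone.
      model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      acc = cross_val_score(model, features, outcome, cv=LeaveOneOut()).mean()
      print(f"cross-validated accuracy: {acc:.2f}")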

  8. Cingulate neglect in humans: disruption of contralesional reward learning in right brain damage.

    PubMed

    Lecce, Francesca; Rotondaro, Francesca; Bonnì, Sonia; Carlesimo, Augusto; Thiebaut de Schotten, Michel; Tomaiuolo, Francesco; Doricchi, Fabrizio

    2015-01-01

    Motivational valence plays a key role in orienting spatial attention. Nonetheless, clinical documentation and understanding of motivationally based deficits of spatial orienting in the human is limited. Here, in one group study and two single-case studies, we examined right brain damaged patients (RBD) with and without left spatial neglect in a spatial reward-learning task, in which the motivational valence of the left contralesional and the right ipsilesional space was contrasted. In each trial two visual boxes were presented, one to the left and one to the right of central fixation. In one session monetary rewards were released more frequently in the box on the left side (75% of trials), whereas in another session they were released more frequently on the right side. In each trial patients were required to: 1) point to each one of the two boxes; 2) choose one of the boxes for obtaining monetary reward; 3) report explicitly the position of reward and whether this position matched or not the original choice. Despite defective spontaneous allocation of attention toward the contralesional space, RBD patients with left spatial neglect showed preserved contralesional reward learning, i.e., comparable to ipsilesional learning and to the reward learning displayed by patients without neglect. A notable exception in the group of neglect patients was L.R., who showed no sign of contralesional reward learning in a series of 120 consecutive trials despite reaching the learning criterion in only 20 trials in the ipsilesional space. L.R. suffered cortical-subcortical brain damage affecting the anterior components of the parietal-frontal attentional network and, compared with all other neglect and non-neglect patients, had additional lesion involvement of the medial anterior cingulate cortex (ACC) and of the adjacent sectors of the corpus callosum. In contrast to his lateralized motivational learning deficit, L.R. had no lateral bias in the early phases of attentional processing, as he suffered no contralesional visual or auditory extinction on double simultaneous tachistoscopic and dichotic stimulation and detected, with no exception, single contralesional visual and auditory stimuli. In a separate study, we were able to compare L.R. with another RBD patient, G.P., who had a selective lesion in the right ACC, in the adjacent callosal connections and the medial-basal prefrontal cortex. G.P. had no contralesional neglect and displayed normal reward learning both in the left and right side of space. These findings show that contralesional reward learning is generally preserved in RBD patients with left spatial neglect and that this can be exploited in rehabilitation protocols. Contralesional reward learning is severely disrupted in neglect patients when an additional lesion of the ACC is present; however, as demonstrated by the comparison between the L.R. and G.P. cases, a selective unilateral lesion of the right ACC does not produce motivational neglect for the contralesional space. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Contribution of Auditory Learning Style to Students’ Mathematical Connection Ability

    NASA Astrophysics Data System (ADS)

    Karlimah; Risfiani, F.

    2017-09-01

    This paper presents results of research on how mathematical concepts relate to other areas of mathematics, to other subjects, and to everyday life. The research examines students with an auditory learning style and relates this learning style to their mathematical connection ability. The researchers used a sequential exploratory (mixed-methods) design, in which qualitative and quantitative research methods are applied in sequence. The results show that providing learning facilities that do not suit a class of students with an auditory learning style yields only barely sufficient mathematical connection ability. The average mathematical connection ability of the auditory students was initially at a medium level of qualification. After instruction was varied to suit the auditory learning style, the average mathematical connection ability remained at a medium level of qualification; nevertheless, more students reached the medium level and fewer remained at the very low and low levels. This suggests that learning facilities appropriate to students' auditory learning style contribute reasonably well to their mathematical connection ability. Therefore, mathematics instruction for students with an auditory learning style should include activities specifically aimed at understanding mathematical concepts and the relations between them.

  10. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L eliminated completely auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  11. The Role of Age and Executive Function in Auditory Category Learning

    PubMed Central

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
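
    As a toy illustration of the strategy distinction above, the sketch below contrasts a conjunctive rule (criteria combined across two stimulus dimensions) with a unidimensional rule on simulated two-dimensional stimuli. The dimension names, criterion values, and quadrant category structure are hypothetical and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D stimulus space standing in for the spectral and temporal modulation
# of ripple sounds; four quadrant categories, as in many rule-based designs.
n_trials = 200
spectral = rng.uniform(0, 1, n_trials)
temporal = rng.uniform(0, 1, n_trials)
true_category = 2 * (spectral > 0.5) + (temporal > 0.5)

def conjunctive_rule(spec, temp, spec_crit=0.5, temp_crit=0.5):
    """Task-optimal strategy: combine single-dimension criteria on both dimensions."""
    return 2 * (spec > spec_crit) + (temp > temp_crit)

def unidimensional_rule(spec, temp, spec_crit=0.5):
    """Suboptimal strategy: attend to one dimension and ignore the other."""
    return 2 * (spec > spec_crit)

for name, rule in [("conjunctive", conjunctive_rule), ("unidimensional", unidimensional_rule)]:
    accuracy = np.mean(rule(spectral, temporal) == true_category)
    print(f"{name:>14s} rule accuracy: {accuracy:.2f}")
```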

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
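
    The sub-additivity criterion mentioned above (AV - V < A) can be written out in a few lines; for a negative-going component such as the N1 it is conventionally read as a reduction in component magnitude. The amplitudes below are invented for illustration, not values from the study.

```python
# Illustrative N1 amplitudes in microvolts (negative-going component); invented values.
A = -4.0    # auditory-only response
V = -1.5    # visual-only response
AV = -4.8   # audiovisual response

residual = AV - V            # audiovisual response with the visual contribution removed
sub_additive = abs(residual) < abs(A)   # smaller magnitude than the auditory-only response
print(f"AV - V = {residual:.1f} uV, A = {A:.1f} uV, sub-additive suppression: {sub_additive}")
```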

  13. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations that fall into types corresponding to three basic logical classifications that may be performed on a perceptual item, ranging from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.

  14. Modality-specificity of Selective Attention Networks

    PubMed Central

    Stewart, Hannah J.; Amitay, Sygal

    2015-01-01

    Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a-priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709
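
    A minimal sketch of the kind of exploratory factor analysis described above, using scikit-learn on simulated data. The measure names, latent structure, varimax rotation, and the 0.4 loading cutoff are placeholder assumptions, not the study's variables or extraction settings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated scores for 48 participants on 8 attention measures (names are placeholders).
measures = ["vANT_orient", "vANT_conflict", "aANT_orient", "aANT_conflict",
            "TAiL_pitch", "TAiL_location", "TEA_1", "TEA_2"]
latent = rng.normal(size=(48, 4))                        # four latent constructs
weights = rng.normal(scale=0.8, size=(4, len(measures)))
X = latent @ weights + rng.normal(scale=0.5, size=(48, len(measures)))

# Standardize, then extract a rotated four-factor solution.
Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(Z)

# Loadings: rows are factors, columns are measures; large |loading| marks defining measures.
for i, loadings in enumerate(fa.components_, start=1):
    defining = [m for m, l in zip(measures, loadings) if abs(l) > 0.4]
    print(f"Factor {i}: {defining}")
```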

  15. A neural network model of ventriloquism effect and aftereffect.

    PubMed

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may advance understanding of the neural architecture and mechanisms underlying visual-auditory integration in the spatial realm.
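
    The core intuition (a sharply tuned visual layer pulling a broadly tuned auditory layer) can be sketched in a few lines. This is not the authors' network: the Gaussian activity profiles, the single multiplicative feedback step standing in for the recurrent lateral dynamics, and all parameter values are illustrative assumptions.

```python
import numpy as np

pos = np.linspace(-90, 90, 181)                 # preferred positions, 1 deg spacing

def gaussian(center, sigma):
    return np.exp(-0.5 * ((pos - center) / sigma) ** 2)

sigma_v, sigma_a = 4.0, 20.0                    # vision spatially sharper than audition
vis_loc, aud_loc = 0.0, 15.0                    # spatially discrepant stimuli

v = gaussian(vis_loc, sigma_v)                  # visual layer activity
a = gaussian(aud_loc, sigma_a)                  # auditory layer activity

# Cross-modal feedback gated by the receiving layer's own activity (a crude stand-in for
# the recurrent dynamics of the published model).  The broad auditory profile has residual
# activity at the visual location, so it is pulled toward vision; the narrow visual profile
# has almost no activity at the auditory location, so it barely moves.
k = 2.0
a_after = a + k * v * a
v_after = v + k * a * v

def decode(activity):
    """Population-vector (centroid) readout of perceived position."""
    return float(np.sum(pos * activity) / np.sum(activity))

print(f"auditory percept: {decode(a):5.1f} -> {decode(a_after):5.1f} deg")
print(f"visual percept:   {decode(v):5.1f} -> {decode(v_after):5.1f} deg")
```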

  16. Statistical learning and auditory processing in children with music training: An ERP study.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
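
    The statistical regularities involved are typically defined by transitional probabilities that are high within triplets and low at triplet boundaries. The sketch below generates a toy stream and estimates those probabilities; the triplet inventory, element labels, and stream length are invented for illustration and are not the study's stimuli.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical triplet inventory (element labels are placeholders).
triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

# Continuous stream: randomly ordered triplets with no immediate repetition.
stream, prev = [], None
for _ in range(300):
    t = random.choice([x for x in triplets if x is not prev])
    stream.extend(t)
    prev = t

# Transitional probability P(y | x) estimated from the stream.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

print("within-triplet  P(B|A) =", round(tp[("A", "B")], 2))            # close to 1.0
print("across-boundary P(D|C) =", round(tp.get(("C", "D"), 0.0), 2))   # roughly 0.33
```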

  17. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    PubMed Central

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2015-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback, however it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067

  18. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    PubMed

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open-eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (a delay of 8 minutes 30 seconds) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities with learning.
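
    The analysis described is a linear mixed-effects model with subject as a random effect. A minimal sketch with statsmodels on simulated data follows; the column names, effect sizes, and model formula are assumptions for illustration, not the study's data or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical long-format data: one recall score per subject x break-activity x modality cell.
rows = []
for s in [f"s{i:02d}" for i in range(30)]:
    subj_intercept = rng.normal(0, 2)                      # random subject effect
    for activity in ["rest", "music", "video_game"]:
        for modality in ["auditory", "visual"]:
            # Illustrative pattern: gaming hurts auditory recall, helps visual recall.
            effect = {"rest": 0, "music": 0,
                      "video_game": -3 if modality == "auditory" else 2}[activity]
            rows.append({"subject": s, "activity": activity, "modality": modality,
                         "recall": 20 + subj_intercept + effect + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Random intercept per subject; fixed effects for activity, modality, and their interaction.
model = smf.mixedlm("recall ~ activity * modality", df, groups=df["subject"]).fit()
print(model.summary())
```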

  19. Sleeping on the rubber-hand illusion: Memory reactivation during sleep facilitates multisensory recalibration.

    PubMed

    Honma, Motoyasu; Plass, John; Brang, David; Florczak, Susan M; Grabowecky, Marcia; Paller, Ken A

    2016-01-01

    Plasticity is essential in body perception so that physical changes in the body can be accommodated and assimilated. Multisensory integration of visual, auditory, tactile, and proprioceptive signals contributes both to conscious perception of the body's current state and to associated learning. However, much is unknown about how novel information is assimilated into body perception networks in the brain. Sleep-based consolidation can facilitate various types of learning via the reactivation of networks involved in prior encoding or through synaptic down-scaling. Sleep may likewise contribute to perceptual learning of bodily information by providing an optimal time for multisensory recalibration. Here we used methods for targeted memory reactivation (TMR) during slow-wave sleep to examine the influence of sleep-based reactivation of experimentally induced alterations in body perception. The rubber-hand illusion was induced with concomitant auditory stimulation in 24 healthy participants on 3 consecutive days. While each participant was sleeping in his or her own bed during intervening nights, electrophysiological detection of slow-wave sleep prompted covert stimulation with either the sound heard during illusion induction, a counterbalanced novel sound, or neither. TMR systematically enhanced feelings of bodily ownership after subsequent inductions of the rubber-hand illusion. TMR also enhanced spatial recalibration of perceived hand location in the direction of the rubber hand. This evidence for a sleep-based facilitation of a body-perception illusion demonstrates that the spatial recalibration of multisensory signals can be altered overnight to stabilize new learning of bodily representations. Sleep-based memory processing may thus constitute a fundamental component of body-image plasticity.

  20. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
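
    The compressive power-function description can be made concrete with a short fit: judged distance is modelled as k * actual^a with an exponent a < 1 producing overestimation of near sources and underestimation of far ones. The data points below are simulated, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Simulated distance judgments (metres); a compressive exponent (< 1) reproduces the
# pattern described in the abstract.
actual = np.array([1, 2, 4, 8, 16, 32], dtype=float)
true_k, true_a = 1.8, 0.6
judged = true_k * actual ** true_a * rng.lognormal(0, 0.05, size=actual.size)

def power_fn(d, k, a):
    return k * d ** a

(k_hat, a_hat), _ = curve_fit(power_fn, actual, judged, p0=(1.0, 1.0))
print(f"fitted: judged = {k_hat:.2f} * actual^{a_hat:.2f}")
print("compressive (a < 1):", a_hat < 1)
```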

  1. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
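
    The cross-cue classification logic (train a pattern classifier on one cue, test it on the other) can be sketched on simulated multivoxel data. Everything below, including voxel counts, noise levels, and the way a shared spatial code is injected, is an illustrative assumption rather than the study's analysis pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

# Simulated multivoxel patterns: two lateralized locations (left/right) defined either by
# ITD or by ILD.  A cue-independent spatial code is modelled by giving both cue types the
# same location-dependent pattern plus independent noise.
n_vox, n_trials = 50, 80
location_code = rng.normal(size=n_vox)

def simulate(noise_sd):
    labels = rng.integers(0, 2, n_trials)                 # 0 = left, 1 = right
    signs = labels[:, None] * 2 - 1
    X = signs * location_code + rng.normal(scale=noise_sd, size=(n_trials, n_vox))
    return X, labels

X_itd, y_itd = simulate(noise_sd=2.0)
X_ild, y_ild = simulate(noise_sd=2.0)

# Train on ITD-defined locations, test on ILD-defined locations: above-chance transfer
# implies a representation of location that is not tied to the specific physical cue.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X_itd, y_itd)
print(f"within-cue accuracy (ITD): {clf.score(X_itd, y_itd):.2f}")
print(f"cross-cue accuracy (ILD):  {clf.score(X_ild, y_ild):.2f}")
```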

  3. Modulation of Auditory Cortex Response to Pitch Variation Following Training with Microtonal Melodies

    PubMed Central

    Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary

    2012-01-01

    We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal to increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, is associated with learning, in accord with models of auditory cortex function, and with data from other modalities. PMID:23227019
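
    A generic 2-down/1-up adaptive staircase of the kind referred to above can be sketched with a simulated listener. The step size, stopping rule, and psychometric function are illustrative assumptions and do not reproduce the procedure actually used in the training.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulated_listener(interval_cents, threshold=30.0, slope=0.15):
    """Probability of a correct response rises with interval size (logistic psychometric fn)."""
    p = 0.5 + 0.5 / (1 + np.exp(-slope * (interval_cents - threshold)))
    return rng.random() < p

interval = 100.0          # starting pitch-interval difference in cents
step = 10.0
correct_streak, reversals, last_direction = 0, [], None

while len(reversals) < 12:
    if simulated_listener(interval):
        correct_streak += 1
        if correct_streak == 2:                    # two correct in a row -> harder
            correct_streak = 0
            direction = "down"
            interval = max(interval - step, 1.0)
        else:
            continue
    else:
        correct_streak = 0
        direction = "up"                           # one error -> easier
        interval += step
    if last_direction and direction != last_direction:
        reversals.append(interval)                 # record the reversal point
    last_direction = direction

print(f"estimated threshold: {np.mean(reversals[-8:]):.1f} cents (tracks ~70.7% correct)")
```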

  4. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    PubMed

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

    A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. These displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid (combined visual and auditory) displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform tasks involving driving, stimulus-response (S-R), divided attention, and stress rating. Results show that among single-modality displays, drivers benefited more from the visual warning display than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance on the divided attention task and the accuracy of their S-R decisions. Drivers' best performance was obtained with the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond the fastest and achieve the best accuracy in both the S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
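
    Spatializing a monaural signal for headphone presentation amounts to convolving it with a left-ear and a right-ear impulse response for the desired azimuth. The sketch below uses crude delay-and-attenuate placeholders instead of measured head-related transfer functions, so all values (sample rate, ITD, ILD, filter lengths) are illustrative only.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(6)
fs = 16000

# 1 s of noise standing in for a call-sign recording.
speech = rng.normal(size=fs)

itd_samples = int(0.0005 * fs)                        # ~0.5 ms interaural time difference
ild = 0.5                                             # far-ear attenuation factor

hrir_near = np.zeros(64); hrir_near[0] = 1.0          # near (leading, louder) ear
hrir_far = np.zeros(64); hrir_far[itd_samples] = ild  # far (delayed, attenuated) ear

left = fftconvolve(speech, hrir_near)[: speech.size]
right = fftconvolve(speech, hrir_far)[: speech.size]
binaural = np.stack([left, right], axis=1)            # 2-channel signal for headphone playback
print("binaural shape:", binaural.shape, "| interaural delay (samples):", itd_samples)
```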

  6. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    PubMed Central

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  7. Auditory access, language access, and implicit sequence learning in deaf children.

    PubMed

    Hall, Matthew L; Eigsti, Inge-Marie; Bortfeld, Heather; Lillo-Martin, Diane

    2018-05-01

    Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]. © 2017 John Wiley & Sons Ltd.

  8. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from-that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors 0270-6474/18/384977-08$15.00/0.
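
    A cross-validated pairwise decoding analysis of the sort described can be sketched with scikit-learn on simulated region-of-interest patterns. The voxel counts, noise level, and the way "separation" is encoded below are invented for illustration and do not reproduce the study's 7-tesla pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical activity patterns from an auditory ROI for two conditions that differ in the
# spatial separation between concurrent sounds (e.g., "narrow" vs "wide").
n_vox, n_per_cond = 60, 40
separation_code = rng.normal(size=n_vox)               # pattern component tied to separation

narrow = rng.normal(scale=1.5, size=(n_per_cond, n_vox))
wide = separation_code + rng.normal(scale=1.5, size=(n_per_cond, n_vox))

X = np.vstack([narrow, wide])
y = np.array([0] * n_per_cond + [1] * n_per_cond)

# Cross-validated pairwise classification, analogous to comparing condition pairs with
# small vs large differences in perceptual spatial separation.
acc = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print(f"mean decoding accuracy: {acc.mean():.2f}")
```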

  9. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  10. Binaural speech processing in individuals with auditory neuropathy.

    PubMed

    Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B

    2012-12-13

    Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
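
    The egocentric/allocentric distinction is essentially a coordinate transform: a world-fixed source keeps a constant allocentric position while its head-centred azimuth changes with head position and direction. A minimal sketch follows; the positions and angles are invented for illustration.

```python
import numpy as np

source = np.array([1.0, 1.0])                          # sound source, world coordinates (m)

def egocentric_azimuth(head_pos, head_dir_deg):
    """Source azimuth relative to the head's facing direction (degrees)."""
    vec = source - head_pos
    world_bearing = np.degrees(np.arctan2(vec[1], vec[0]))   # allocentric bearing
    az = world_bearing - head_dir_deg
    return (az + 180) % 360 - 180                      # wrap to [-180, 180)

# The same world-fixed source yields different egocentric azimuths as the head moves.
for head_pos, head_dir in [((0.0, 0.0), 0.0), ((0.0, 0.0), 90.0), ((1.0, 0.0), 0.0)]:
    ego = egocentric_azimuth(np.array(head_pos), head_dir)
    print(f"head at {head_pos}, facing {head_dir:5.1f} deg -> egocentric azimuth {ego:6.1f} deg")
```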

  12. Auditory peripersonal space in humans.

    PubMed

    Farnè, Alessandro; Làdavas, Elisabetta

    2002-10-01

    In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.

  13. Teaching for Different Learning Styles.

    ERIC Educational Resources Information Center

    Cropper, Carolyn

    1994-01-01

    This study examined learning styles in 137 high ability fourth-grade students. All students were administered two learning styles inventories. Characteristics of students with the following learning styles are summarized: auditory language, visual language, auditory numerical, visual numerical, tactile concrete, individual learning, group…

  14. Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks

    DTIC Science & Technology

    2012-05-01

    [Only fragmented excerpt text is available for this record.] "Technologies 1.1.1. Virtual Whiteboard: Cognitive theories relating the utilization, storage, and retrieval of verbal and spatial information, such as …" A flattened table of subscale names and abbreviations appears in the excerpt: … (AE); Spatial emergent (SE); Auditory linguistic (AL); Spatial positional (SP); Facial figural (FF); Spatial quantitative (SQ); Facial motive (FM); Tactile figural (…). "… driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales."

  15. An fMRI Study of the Neural Systems Involved in Visually Cued Auditory Top-Down Spatial and Temporal Attention

    PubMed Central

    Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong

    2012-01-01

    Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time-interval cues) remain undefined, the differences in brain activity between directed attention to auditory spatial location and directed attention to time intervals are unclear. Using functional magnetic resonance imaging (fMRI), we measured activation in a cue-target paradigm in which a visual cue directed attention to an auditory target in either the spatial or the temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field, responded to spatial orienting of attention, but activity was absent in the bilateral frontal eye field (FEF) during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800

  16. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low-frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced-choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
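
    Inter-channel cross correlation in a low-frequency band can be computed as the maximum normalized cross-correlation within about +/-1 ms of lag after band-limiting the two signals. The sketch below uses synthetic signals and an illustrative 120 Hz low-pass cutoff; it is not the measurement procedure of the reviewed studies.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(8)
fs = 48000

# Two illustrative two-channel conditions: fully correlated bass (mono subwoofer feed)
# versus partially decorrelated bass sent to separate low-frequency drivers.
bass = rng.normal(size=fs)
left_corr, right_corr = bass, bass.copy()
left_dec, right_dec = bass, 0.5 * bass + 0.87 * rng.normal(size=fs)

sos = butter(4, 120, btype="low", fs=fs, output="sos")   # keep content below ~120 Hz

def iacc(left, right, max_lag_ms=1.0):
    """Maximum normalized cross-correlation within +/- max_lag_ms of lag."""
    l, r = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
    max_lag = int(max_lag_ms * 1e-3 * fs)
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    return max(
        np.sum(l[max(0, k):fs + min(0, k)] * r[max(0, -k):fs - max(0, k)]) / norm
        for k in range(-max_lag, max_lag + 1)
    )

print(f"IACC, correlated bass:   {iacc(left_corr, right_corr):.2f}")
print(f"IACC, decorrelated bass: {iacc(left_dec, right_dec):.2f}")
```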

  17. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  18. Tracking the voluntary control of auditory spatial attention with event-related brain potentials.

    PubMed

    Störmer, Viola S; Green, Jessica J; McDonald, John J

    2009-03-01

    A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.

  19. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), it appears in the nucleus of the brachium of the IC (BIN) and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  20. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  1. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed to separate the multiple communication channels usually heard over one ear by placing them at different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer-ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.

  2. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  3. Elevated depressive symptoms enhance reflexive but not reflective auditory category learning.

    PubMed

    Maddox, W Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G

    2014-09-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual difference in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. The implications of this work for dual-process approach to auditory learning and depression are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

    Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distracter sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  6. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
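
    The MLE model referenced here has a standard closed form: each cue is weighted by its relative reliability (the inverse of its variance), and the integrated estimate is predicted to be less variable than either cue alone. A minimal sketch with made-up numbers (not the study's data):

        # Reliability-weighted (maximum likelihood) cue integration.
        # Values are illustrative, not taken from the study.
        sigma_a, sigma_v = 8.0, 2.0      # unisensory noise (deg): auditory, visual
        s_a, s_v = 10.0, 4.0             # unisensory location estimates (deg azimuth)

        r_a, r_v = 1 / sigma_a**2, 1 / sigma_v**2        # reliabilities = inverse variances
        w_a, w_v = r_a / (r_a + r_v), r_v / (r_a + r_v)  # integration weights

        s_av = w_a * s_a + w_v * s_v                     # integrated location estimate
        sigma_av = (1 / (r_a + r_v)) ** 0.5              # predicted (reduced) noise

        print(f"weights: auditory={w_a:.2f}, visual={w_v:.2f}")
        print(f"integrated estimate: {s_av:.2f} deg, sigma: {sigma_av:.2f} deg")

    The behavioral and neurometric weights the authors estimate can then be compared against the weights predicted this way from the measured unisensory reliabilities.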

  7. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    PubMed

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  8. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

    Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  9. On the spatial specificity of audiovisual crossmodal exogenous cuing effects.

    PubMed

    Lee, Jae; Spence, Charles

    2017-06-01

    It is generally accepted that the presentation of an auditory cue will direct an observer's spatial attention to the region of space from where it originates and therefore facilitate responses to visual targets presented there rather than from a different position within the cued hemifield. However, to date, there has been surprisingly limited evidence published in support of such within-hemifield crossmodal exogenous spatial cuing effects. Here, we report two experiments designed to investigate within- and between-hemifield spatial cuing effects in the case of audiovisual exogenous covert orienting. Auditory cues were presented from one of four frontal loudspeakers (two on either side of central fixation). There were eight possible visual target locations (one above and another below each of the loudspeakers). The auditory cues were evenly separated laterally by 30° in Experiment 1, and by 10° in Experiment 2. The potential cue and target locations were separated vertically by approximately 19° in Experiment 1, and by 4° in Experiment 2. On each trial, the participants made a speeded elevation (i.e., up vs. down) discrimination response to the visual target following the presentation of a spatially-nonpredictive auditory cue. Within-hemifield spatial cuing effects were observed only when the auditory cues were presented from the inner locations. Between-hemifield spatial cuing effects were observed in both experiments. Taken together, these results demonstrate that crossmodal exogenous shifts of spatial attention depend on the eccentricity of both the cue and target in a way that has not been made explicit by previous research. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Social interaction with a tutor modulates responsiveness of specific auditory neurons in juvenile zebra finches.

    PubMed

    Yanagihara, Shin; Yazaki-Sugiyama, Yoko

    2018-04-12

    Behavioral states of animals, such as observing the behavior of a conspecific, modify signal perception and/or sensations that influence state-dependent higher cognitive behavior, such as learning. Recent studies have shown that neuronal responsiveness to sensory signals is modified when animals are engaged in social interactions with others or in locomotor activities. However, how these changes produce state-dependent differences in higher cognitive function is still largely unknown. Zebra finches, which have served as the premier songbird model, learn to sing from early auditory experiences with tutors. They also learn from playback of recorded songs; however, learning can be greatly improved when song models are provided through social communication with tutors (Eales, 1989; Chen et al., 2016). Recently, we found a subset of neurons in the higher-level auditory cortex of juvenile zebra finches that exhibit highly selective auditory responses to the tutor song after song learning, suggesting an auditory memory trace of the tutor song (Yanagihara and Yazaki-Sugiyama, 2016). Here we show that auditory responses of these selective neurons became greater when juveniles were paired with their tutors, while responses of non-selective neurons did not change. These results suggest that social interaction modulates cortical activity and might function in state-dependent song learning. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    ERIC Educational Resources Information Center

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  12. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Moreover, until recently the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential applications. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To address practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
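
    The rendering step implied throughout, playing a source through a listener's HRTF for a given direction, comes down to convolving the mono signal with the left- and right-ear head-related impulse responses. A minimal sketch, with placeholder impulse responses standing in for ones selected from a public HRTF database:

        # Minimal HRTF spatialization sketch: convolve a mono source with the
        # left/right head-related impulse responses (HRIRs) for one direction.
        # hrir_left and hrir_right are placeholders; in practice they would be
        # loaded from whichever HRTF database is in use.
        import numpy as np
        from scipy.signal import fftconvolve

        fs = 44100
        t = np.arange(0, 0.5, 1 / fs)
        source = 0.5 * np.sin(2 * np.pi * 440 * t)        # mono test tone

        rng = np.random.default_rng(1)
        hrir_left = rng.standard_normal(200) * np.exp(-np.arange(200) / 30)
        hrir_right = rng.standard_normal(200) * np.exp(-np.arange(200) / 30)

        left = fftconvolve(source, hrir_left)[: len(source)]
        right = fftconvolve(source, hrir_right)[: len(source)]
        binaural = np.stack([left, right], axis=1)        # samples x 2, ready for playback
        print(binaural.shape)

    Selecting a preferred HRTF, as in the study's subjective selection procedure, amounts to choosing which measured impulse-response pair is used in this filtering step.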

  13. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  14. Auditory processing disorders, verbal disfluency, and learning difficulties: a case study.

    PubMed

    Jutras, Benoît; Lagacé, Josée; Lavigne, Annik; Boissonneault, Andrée; Lavoie, Charlen

    2007-01-01

    This case study reports the findings of auditory behavioral and electrophysiological measures performed on a graduate student (identified as LN) presenting verbal disfluency and learning difficulties. Results of behavioral audiological testing documented the presence of auditory processing disorders, particularly temporal processing and binaural integration. Electrophysiological test results, including middle latency, late latency and cognitive potentials, revealed that LN's central auditory system processes acoustic stimuli differently to a reference group with normal hearing.

  15. Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.

    PubMed

    Golob, Edward J; Winston, Jenna; Mock, Jeffrey R

    2017-01-01

    Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1) or a minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory.

  16. Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients

    PubMed Central

    Golob, Edward J.; Winston, Jenna; Mock, Jeffrey R.

    2017-01-01

    Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1) or a minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory. PMID:29218024

  17. Visually induced plasticity of auditory spatial perception in macaques.

    PubMed

    Woods, Timothy M; Recanzone, Gregg H

    2004-09-07

    When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.

  18. Utilising reinforcement learning to develop strategies for driving auditory neural implants.

    PubMed

    Lee, Geoffrey W; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G

    2016-08-01

    In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment that is based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator, we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning acoustic neural stimulation strategies. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals, we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provides a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Learning an effective replication using neural stimulation takes less than 20 min under continuous testing. These results show the utility of reinforcement learning in the field of neural stimulation. These results can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
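
    The abstract names a modified n-armed bandit solution but gives no implementation detail; the sketch below is a generic epsilon-greedy bandit of the kind that could select among candidate stimulation patterns, with the reward function (similarity between evoked and target neural responses) left as a hypothetical stub.

        # Generic epsilon-greedy n-armed bandit sketch. The "arms" stand in for
        # candidate electrical stimulation patterns; reward() is a hypothetical stub
        # for how well the evoked IC response matches the acoustically evoked target.
        import numpy as np

        rng = np.random.default_rng(0)
        n_arms = 10
        true_quality = rng.uniform(0, 1, n_arms)    # unknown to the learner

        def reward(arm):
            # Stub: noisy similarity score between evoked and target responses.
            return true_quality[arm] + rng.normal(0, 0.1)

        epsilon = 0.1
        estimates = np.zeros(n_arms)
        counts = np.zeros(n_arms)

        for trial in range(2000):
            if rng.random() < epsilon:
                arm = int(rng.integers(n_arms))     # explore a random pattern
            else:
                arm = int(np.argmax(estimates))     # exploit the current best pattern
            r = reward(arm)
            counts[arm] += 1
            estimates[arm] += (r - estimates[arm]) / counts[arm]   # incremental mean

        print("best true arm:", int(np.argmax(true_quality)),
              "| most-chosen arm:", int(np.argmax(counts)))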

  19. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate system, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where the sensory information is coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot degrees of freedom, the desired movements and action can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system and present preliminary results on a real robotic platform.
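
    One concrete piece of the hierarchy described here is the conversion between coordinate frames, for example re-expressing a sound source located relative to the head in body-centered coordinates. A minimal rigid-transform sketch, with the head pose values invented for illustration:

        # Minimal sketch of converting a head-centered 3D location to body-centered
        # coordinates: rotate by the head's orientation, then translate by the head's
        # position on the body. Pose values here are invented for illustration.
        import numpy as np

        def rotation_z(yaw_rad):
            c, s = np.cos(yaw_rad), np.sin(yaw_rad)
            return np.array([[c, -s, 0.0],
                             [s,  c, 0.0],
                             [0.0, 0.0, 1.0]])

        # Hypothetical pose: head turned 30 degrees left, 0.25 m above the body origin.
        R_head_to_body = rotation_z(np.deg2rad(30))
        head_offset = np.array([0.0, 0.0, 0.25])

        # A sound source 1 m straight ahead of the head (head-centered coordinates).
        p_head = np.array([1.0, 0.0, 0.0])
        p_body = R_head_to_body @ p_head + head_offset
        print("body-centered location:", np.round(p_body, 3))

    Chaining such transforms (eye-to-head, head-to-body, body-to-world) is what lets locations stored at one level of the hierarchy drive actions expressed at another.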

  20. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
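
    The temporal order thresholds tracked across training sessions are typically obtained by fitting a psychometric function to the proportion of one response type (e.g., "visual first") as a function of stimulus onset asynchrony. A minimal sketch with fabricated data (the SOAs and proportions are not from the study):

        # Fit a cumulative-Gaussian psychometric function to fabricated TOJ data and
        # read off a temporal order threshold (JND). Not the study's data.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)  # ms
        p_first = np.array([0.02, 0.08, 0.22, 0.38, 0.50, 0.64, 0.80, 0.93, 0.98])

        def psychometric(x, pss, sigma):
            # pss: point of subjective simultaneity; sigma: spread of the function.
            return norm.cdf(x, loc=pss, scale=sigma)

        (pss, sigma), _ = curve_fit(psychometric, soa, p_first, p0=[0.0, 80.0])
        jnd = sigma * norm.ppf(0.75)   # SOA between the 50% and 75% points
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")

    Learning then shows up as a session-by-session decrease in the fitted JND, which is the "reduced temporal order discrimination threshold" reported above.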

  1. Relationships between Visual and Auditory Perceptual Skills and Comprehension in Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Weaver, Phyllis A.; Rosner, Jerome

    1979-01-01

    Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…

  2. Effect of FM Auditory Trainers on Attending Behaviors of Learning-Disabled Children.

    ERIC Educational Resources Information Center

    Blake, Ruth; And Others

    1991-01-01

    This study investigated the effect of FM (frequency modulation) auditory trainer use on attending behaviors of 36 students (ages 5-10) with learning disabilities. Children wearing the auditory trainers scored better than control students on eye contact, having body turned toward sound source, and absence of extraneous body movement and vocal…

  3. Comparison of Auditory/Visual and Visual/Motor Practice on the Spelling Accuracy of Learning Disabled Children.

    ERIC Educational Resources Information Center

    Aleman, Cheryl; And Others

    1990-01-01

    Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)

  4. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.

  5. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
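
    The N2ac measure described here is an averaged difference waveform; given single-trial epochs and each trial's target side, it can be computed along the lines below (array names, shapes, and the electrode choice are assumptions for illustration, not the authors' pipeline):

        # Sketch of an N2ac-style difference waveform: mean ERP for left-target trials
        # minus mean ERP for right-target trials at a chosen (anterior) electrode.
        # epochs, target_side, and the electrode index are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_channels, n_samples = 120, 32, 600   # e.g., 600 samples = 1.2 s at 500 Hz
        epochs = rng.standard_normal((n_trials, n_channels, n_samples))
        target_side = rng.choice(["left", "right"], size=n_trials)
        electrode = 5                                    # index of an anterior electrode

        left_mean = epochs[target_side == "left", electrode].mean(axis=0)
        right_mean = epochs[target_side == "right", electrode].mean(axis=0)
        n2ac = left_mean - right_mean                    # difference waveform over time

        fs = 500
        peak_latency_ms = 1000 * np.argmax(np.abs(n2ac)) / fs
        print(f"peak of difference waveform at ~{peak_latency_ms:.0f} ms")

    In the study, the amplitude of this difference around its peak (roughly 500 ms post-stimulus) is what correlated with localization performance.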

  6. Technologies That Capitalize on Study Skills with Learning Style Strengths

    ERIC Educational Resources Information Center

    Howell, Dusti D.

    2008-01-01

    This article addresses the tools available in the rapidly changing digital learning environment and offers a variety of approaches for how they can assist students with visual, auditory, or kinesthetic learning strengths. Teachers can use visual, auditory, and kinesthetic assessment tests to identify learning preferences and then recommend…

  7. Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis

    ERIC Educational Resources Information Center

    Kratzig, Gregory P.; Arbuthnott, Katherine D.

    2006-01-01

    Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…

  8. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  9. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
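
    The variance decomposition referred to here can be illustrated with a simple sums-of-squares partition: given a unit's mean response for every pitch × timbre × azimuth combination, the share of response variance attributable to each stimulus dimension is that dimension's main-effect sum of squares divided by the total. The sketch below uses fabricated responses and is not the authors' analysis code:

        # Toy variance decomposition over a pitch x timbre x azimuth response table.
        # Fraction of variance for each dimension = main-effect sum of squares / total.
        # Responses are fabricated for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pitch, n_timbre, n_azimuth = 4, 4, 6

        # Fabricated mean spike counts: sensitive mostly to pitch, a little to azimuth.
        pitch_effect = np.linspace(0, 6, n_pitch)[:, None, None]
        azimuth_effect = np.linspace(0, 2, n_azimuth)[None, None, :]
        resp = 10 + pitch_effect + azimuth_effect + rng.normal(0, 0.5, (n_pitch, n_timbre, n_azimuth))

        grand = resp.mean()
        ss_total = ((resp - grand) ** 2).sum()

        def main_effect_ss(axis_kept):
            """Sum of squares carried by one dimension's marginal means."""
            axes_to_avg = tuple(a for a in range(3) if a != axis_kept)
            marginal = resp.mean(axis=axes_to_avg)
            cells_per_level = resp.size / resp.shape[axis_kept]
            return cells_per_level * ((marginal - grand) ** 2).sum()

        for name, axis in [("pitch", 0), ("timbre", 1), ("azimuth", 2)]:
            print(f"{name}: {main_effect_ss(axis) / ss_total:.2f} of total variance")

    Interaction terms (e.g., pitch × timbre) would be quantified analogously from the cell means remaining after the main effects are removed.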

  10. Physical fitness modulates incidental but not intentional statistical learning of simultaneous auditory sequences during concurrent physical exercise.

    PubMed

    Daikoku, Tatsuya; Takahashi, Yuji; Futagami, Hiroko; Tarumoto, Nagayoshi; Yasuda, Hideki

    2017-02-01

    In real-world auditory environments, humans are exposed to overlapping auditory information such as that made by human voices and musical instruments, even during routine physical activities such as walking and cycling. The present study investigated how concurrent physical exercise affects incidental and intentional learning of overlapping auditory streams, and whether physical fitness modulates learning performance. Participants were divided into lower- and higher-fitness groups of 11 participants each, based on their VO2max values. They were presented with simultaneous auditory sequences, each with a distinct statistical regularity (i.e., statistical learning), either while pedaling on a bike or while sitting on the bike at rest. In Experiment 1, they were instructed to attend to one of the two sequences and to ignore the other. In Experiment 2, they were instructed to attend to both sequences. After exposure to the sequences, learning effects were evaluated with a familiarity test. In Experiment 1, statistical learning of the ignored sequences during concurrent pedaling was higher in participants with high than with low physical fitness, whereas for the attended sequence there was no significant difference between the high- and low-fitness groups. Furthermore, there was no significant effect of physical fitness on learning while resting. In Experiment 2, participants with both high and low physical fitness could perform intentional statistical learning of the two simultaneous sequences in both the exercise and rest sessions. Improved physical fitness might therefore facilitate incidental but not intentional statistical learning of simultaneous auditory sequences during concurrent physical exercise.

  11. Auditory-vocal mirroring in songbirds.

    PubMed

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  12. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.

    PubMed

    Tonelli, Alessia; Gori, Monica; Brayda, Luca

    2016-01-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and second executions of the space bisection. They were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group instead performed the space bisection task twice in a row without any environmental exploration in between. Taking the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once spatial tactile cues had been internalized. In the control group, no improvement was found between the first and second executions of the space bisection without environmental exploration, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory spatial task as effectively as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  13. Using Neuroplasticity-Based Auditory Training to Improve Verbal Memory in Schizophrenia

    PubMed Central

    Fisher, Melissa; Holland, Christine; Merzenich, Michael M.; Vinogradov, Sophia

    2009-01-01

    Objective Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Method Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Results Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Conclusions Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach. PMID:19448187

  14. [Identification of auditory laterality by means of a new dichotic digit test in Spanish, and body laterality and spatial orientation in children with dyslexia and in controls].

    PubMed

    Olivares-García, M R; Peñaloza-López, Y R; García-Pedroza, F; Jesús-Pérez, S; Uribe-Escamilla, R; Jiménez-de la Sancha, S

    In this study, a new dichotic digit test in Spanish (NDDTS) was applied in order to identify auditory laterality. We also evaluated body laterality and spatial location using the Subirana test. Both the dichotic test and the Subirana test for body laterality and spatial location were applied in a group of 40 children with dyslexia and in a control group made up of 40 children who were paired according to age and gender. The results of the three evaluations were analysed using the SPSS 10 software application, with Pearson's chi-squared test. It was seen that 42.5% of the children in the group of dyslexics had mixed auditory laterality, compared to 7.5% in the control group (p ≤ 0.05). Body laterality was mixed in 25% of dyslexic children and in 2.5% in the control group (p ≤ 0.05) and there was 72.5% spatial disorientation in the group of dyslexics, whereas only 15% (p ≤ 0.05) was found in the control group. The NDDTS proved to be a useful tool for demonstrating that mixed auditory laterality and auditory predominance of the left ear are linked to dyslexia. The results of this test exceed those obtained for body laterality. Spatial orientation is indeed altered in children with dyslexia. The importance of this finding makes it necessary to study the central auditory processes in all cases in order to define better rehabilitation strategies in Spanish-speaking children.
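
    The group comparisons reported here reduce to 2 × 2 chi-squared tests; for mixed auditory laterality, 42.5% of 40 dyslexic children is 17 of 40, versus 3 of 40 controls (7.5%). A minimal sketch of the equivalent Pearson chi-squared computation (done by hand in Python here, rather than in SPSS as the authors did):

        # Pearson chi-squared test on the reported mixed-auditory-laterality counts:
        # 17 of 40 dyslexic children vs. 3 of 40 controls (42.5% vs. 7.5%).
        import numpy as np

        observed = np.array([[17.0, 23.0],    # dyslexia group: mixed, not mixed
                             [3.0, 37.0]])    # control group:  mixed, not mixed
        row = observed.sum(axis=1, keepdims=True)
        col = observed.sum(axis=0, keepdims=True)
        expected = row @ col / observed.sum()
        chi2 = ((observed - expected) ** 2 / expected).sum()
        # With 1 degree of freedom, the p = 0.05 critical value is 3.84.
        print(f"chi-squared = {chi2:.2f} (critical value at p = 0.05, df = 1: 3.84)")

    The statistic comes out well above 3.84, consistent with the reported p ≤ 0.05; note this hand computation omits any continuity correction the original SPSS analysis may have applied.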

  15. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    PubMed

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning

    PubMed Central

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H.R.; Schmidt, Marc

    2015-01-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC’s auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf’s involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. PMID:23603062

  18. A Perceptuo-Cognitive-Motor Approach to the Special Child.

    ERIC Educational Resources Information Center

    Kornblum, Rena Beth

    A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…

  19. "Where do auditory hallucinations come from?"--a brain morphometry study of schizophrenia patients with inner or outer space hallucinations.

    PubMed

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.

  20. Auditory Processing, Linguistic Prosody Awareness, and Word Reading in Mandarin-Speaking Children Learning English

    ERIC Educational Resources Information Center

    Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.

    2017-01-01

    This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…

  1. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  2. Landau-Kleffner Syndrome

    MedlinePlus

... difficult to diagnose and may be misdiagnosed as autism, pervasive developmental disorder, hearing impairment, learning disability, auditory/ ...

  3. Sensory Coding and Sensitivity to Local Estrogens Shift during Critical Period Milestones in the Auditory Cortex of Male Songbirds.

    PubMed

    Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke

    2017-01-01

Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.

  4. Sensory Coding and Sensitivity to Local Estrogens Shift during Critical Period Milestones in the Auditory Cortex of Male Songbirds

    PubMed Central

    2017-01-01

    Abstract Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor’s song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM’s established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural “switch point” from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds. PMID:29255797

  5. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770

  6. Patterns of Auditory Perception Skills in Children with Learning Disabilities: A Computer-Assisted Approach.

    ERIC Educational Resources Information Center

    Pressman, E.; And Others

    1986-01-01

    The auditory receptive language skills of 40 learning disabled (LD) and 40 non-disabled boys (all 7 - 11 years old) were assessed via computerized versions of subtests of the Goldman-Fristoe-Woodcock Auditory Skills Test Battery. The computerized assessment correctly identified 92.5% of the LD group and 65% of the normal control children. (DB)

  7. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    ERIC Educational Resources Information Center

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-01-01

    Purpose: The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Method: Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for…

  8. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  9. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the dynamic environment. The system operates in real time, providing the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
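
    To make the image-to-sound mapping described above concrete, the sketch below scans a brightness image column by column and renders each column as a tone complex in which row position sets frequency and pixel brightness sets amplitude. It is a toy analogue of the mapping the abstract describes, not the VISOR implementation; all parameter values are illustrative assumptions.

        import numpy as np

        def sonify_image(image, duration=1.0, fs=44100, f_lo=200.0, f_hi=8000.0):
            """Toy image-to-sound mapping: columns are scanned left to right over
            `duration` seconds; each row drives a sine at one frequency, and pixel
            brightness controls that sine's amplitude. Illustrative only."""
            img = np.asarray(image, dtype=float)
            img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # crude range normalization
            n_rows, n_cols = img.shape
            freqs = np.geomspace(f_lo, f_hi, n_rows)[::-1]            # top row -> high pitch
            samples_per_col = int(duration * fs / n_cols)
            t = np.arange(samples_per_col) / fs
            tones = np.sin(2 * np.pi * freqs[:, None] * t)            # one sine per row
            audio = np.concatenate([img[:, c] @ tones for c in range(n_cols)])
            return audio / (np.abs(audio).max() + 1e-9)               # scale to [-1, 1]

        # Example: sonify a random 64x64 "image" into 1 s of audio at 44.1 kHz
        signal = sonify_image(np.random.rand(64, 64))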

  10. The head-centered meridian effect: auditory attention orienting in conditions of impaired visuo-spatial information.

    PubMed

    Olivetti Belardinelli, Marta; Santangelo, Valerio

    2005-07-08

This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, or in a location separated from it by the vertical visual meridian (VM), the vertical head-centered meridian (HCM), or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when the cue and target locations were on opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. No differences were found between conditions that crossed the HCM and those that did not. Therefore it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.

  11. Effects of laterality and pitch height of an auditory accessory stimulus on horizontal response selection: the Simon effect and the SMARC effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2009-08-01

In the present article, we investigated the effects of pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, a right (left) response with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height was strong enough to influence horizontal responses despite the modality difference with the task target.

  12. Emotional Intelligence among Auditory, Reading, and Kinesthetic Learning Styles of Elementary School Students in Ambon-Indonesia

    ERIC Educational Resources Information Center

    Leasa, Marleny; Corebima, Aloysius D.; Ibrohim; Suwono, Hadi

    2017-01-01

    Students have unique ways in managing the information in their learning process. VARK learning styles associated with memory are considered to have an effect on emotional intelligence. This quasi-experimental research was conducted to compare the emotional intelligence among the students having auditory, reading, and kinesthetic learning styles in…

  13. Visual and Auditory Learning Processes in Normal Children and Children with Specific Learning Disabilities. Final Report.

    ERIC Educational Resources Information Center

    McGrady, Harold J.; Olson, Don A.

    To describe and compare the psychosensory functioning of normal children and children with specific learning disabilities, 62 learning disabled and 68 normal children were studied. Each child was given a battery of thirteen subtests on an automated psychosensory system representing various combinations of auditory and visual intra- and…

  14. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    PubMed

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  15. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  16. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
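
    As an illustration of the kind of cue statistics discussed above, the following sketch tabulates interaural level differences (ILD) and interaural phase differences (IPD) per frequency channel from a binaural recording using a short-time Fourier transform. It is a simplified stand-in for the authors' analysis; the exact filterbank and parameters are not given in the abstract and are assumptions here.

        import numpy as np
        from scipy.signal import stft

        def binaural_cue_frames(left, right, fs, nperseg=1024):
            """Per-frequency ILD (dB) and IPD (rad) for each STFT frame of a binaural
            recording; a simplified sketch of binaural cue statistics, not the study's code."""
            f, _, L = stft(left, fs=fs, nperseg=nperseg)
            _, _, R = stft(right, fs=fs, nperseg=nperseg)
            eps = 1e-12
            ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))  # level difference per bin
            ipd = np.angle(L * np.conj(R))                              # phase difference per bin
            return f, ild, ipd                                          # (n_freqs,), (n_freqs, n_frames), (n_freqs, n_frames)

        # Example: 1 s of noise with the right channel delayed by 8 samples (~0.5 ms)
        fs = 16000
        noise = np.random.randn(fs)
        freqs, ild, ipd = binaural_cue_frames(noise, np.roll(noise, 8), fs)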

  17. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  18. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    USDA-ARS?s Scientific Manuscript database

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  19. LAMP: 100+ Systematic Exercise Lessons for Developing Linguistic Auditory Memory Patterns in Beginning Readers.

    ERIC Educational Resources Information Center

    Valett, Robert E.

    Research findings on auditory sequencing and auditory blending and fusion, auditory-visual integration, and language patterns are presented in support of the Linguistic Auditory Memory Patterns (LAMP) program. LAMP consists of 100 developmental lessons for young students with learning disabilities or language problems. The lessons are included in…

  20. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
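
    The accuracy and variability measures contrasted above can be illustrated with a short sketch: mean absolute localization error as the accuracy measure and the standard deviation of signed error as the variability measure, computed separately per hemifield. The scoring details below are assumptions for illustration, not the study's exact analysis.

        import numpy as np

        def localization_summary(target_deg, response_deg):
            """Accuracy (mean absolute error) and variability (SD of signed error) for
            free-field localization responses, split by hemifield. Illustrative only."""
            target = np.asarray(target_deg, dtype=float)     # speaker azimuths, e.g. -90..+90 deg
            response = np.asarray(response_deg, dtype=float)
            error = response - target
            out = {}
            for name, mask in [("left", target < 0), ("right", target > 0)]:
                out[name] = {
                    "accuracy_MAE_deg": float(np.mean(np.abs(error[mask]))),
                    "variability_SD_deg": float(np.std(error[mask], ddof=1)),
                }
            return out

        # Example with four hypothetical trials (two per hemifield)
        print(localization_summary([-60, -30, 30, 60], [-45, -40, 35, 80]))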

  1. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve); they are not a simple information-transmitting system but achieve a veritable integration of the sound stimulus at different levels by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is possible because of binaural hearing, commissural pathways at each level of the auditory system, and the integration of the phase shift and intensity difference between the signals coming from the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through the attention given to the signal.

  2. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  3. Learning styles of medical students - implications in education.

    PubMed

    Buşan, Alina-Mihaela

    2014-01-01

    The term "learning style" refers to the fact that each person has a different way of accumulating knowledge. While some prefer listening to learn better, others need to write or they only need to read the text or see a picture to later remember. According to Fleming and Mills the learning styles can be classified in Visual, Auditory and Kinesthetic. There is no evidence that teaching according to the learning style can help a person, yet this cannot be ignored. In this study, a number of 230 medical students were questioned in order to determine their learning style. We determined that 73% of the students prefer one learning style, 22% prefer to learn using equally two learning style, while the rest prefer three learning styles. According to this study the distribution of the learning styles is as following: 33% visual, 26% auditory, 14% kinesthetic, 12% visual and auditory styles equally, 6% visual and kinesthetic, 4% auditory and kinesthetic and 5% all three styles. 32 % of the students that participated at this study are from UMF Craiova, 32% from UMF Carol Davila, 11% University of Medicine T Popa, Iasi, 9% UMF Cluj Iulius Hatieganu. The way medical students learn is different from the general population. This is why it is important when teaching to considerate how the students learn in order to facilitate the learning.

  4. Learning Styles of Medical Students - Implications in Education

    PubMed Central

    BUŞAN, ALINA-MIHAELA

    2014-01-01

Background: The term “learning style” refers to the fact that each person has a different way of accumulating knowledge. While some prefer listening to learn better, others need to write, or they only need to read the text or see a picture to remember it later. According to Fleming and Mills the learning styles can be classified as Visual, Auditory and Kinesthetic. There is no evidence that teaching according to the learning style can help a person, yet this cannot be ignored. Subjects and methods: In this study, a number of 230 medical students were questioned in order to determine their learning style. Results: We determined that 73% of the students prefer one learning style, 22% prefer to learn using two learning styles equally, while the rest prefer three learning styles. According to this study the distribution of the learning styles is as follows: 33% visual, 26% auditory, 14% kinesthetic, 12% visual and auditory styles equally, 6% visual and kinesthetic, 4% auditory and kinesthetic and 5% all three styles. 32% of the students that participated in this study are from UMF Craiova, 32% from UMF Carol Davila, 11% University of Medicine T Popa, Iasi, 9% UMF Cluj Iulius Hatieganu. Discussions: The way medical students learn is different from the general population. This is why it is important when teaching to consider how the students learn in order to facilitate the learning. PMID:25729590

  5. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  6. Sensory Processing of Backward-Masking Signals in Children with Language-Learning Impairment as Assessed with the Auditory Brainstem Response.

    ERIC Educational Resources Information Center

    Marler, Jeffrey A.; Champlin, Craig A.

    2005-01-01

    The purpose of this study was to examine the possible contribution of sensory mechanisms to an auditory processing deficit shown by some children with language-learning impairment (LLI). Auditory brainstem responses (ABRs) were measured from 2 groups of school-aged (8-10 years) children. One group consisted of 10 children with LLI, and the other…

  7. A Comparison of Visual and Auditory Processing Tests on the Woodcock-Johnson Tests of Cognitive Ability, Revised and the Learning Efficiency Test-II.

    ERIC Educational Resources Information Center

    Bolen, L. M.; Kimball, D. J.; Hall, C. W.; Webster, R. E.

    1997-01-01

Compares the visual and auditory processing factors of the Woodcock-Johnson Tests of Cognitive Ability, Revised (WJR COG) and the visual and auditory memory factors of the Learning Efficiency Test-II (LET-II) among 120 college students. Results indicate two significant performance differences between the WJR COG and LET-II. (RJM)

  8. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  9. Technical aspects of a demonstration tape for three-dimensional sound displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    This document was developed to accompany an audio cassette that demonstrates work in three-dimensional auditory displays, developed at the Ames Research Center Aerospace Human Factors Division. It provides a text version of the audio material, and covers the theoretical and technical issues of spatial auditory displays in greater depth than on the cassette. The technical procedures used in the production of the audio demonstration are documented, including the methods for simulating rotorcraft radio communication, synthesizing auditory icons, and using the Convolvotron, a real-time spatialization device.

  10. Auditory temporal perceptual learning and transfer in Chinese-speaking children with developmental dyslexia.

    PubMed

    Zhang, Manli; Xie, Weiyi; Xu, Yanzhi; Meng, Xiangzhi

    2018-03-01

Perceptual learning refers to the improvement of perceptual performance as a function of training. Recent studies found that auditory perceptual learning may improve phonological skills in individuals with developmental dyslexia in alphabetic writing systems. However, whether auditory perceptual learning could also benefit the reading skills of those learning the Chinese logographic writing system is, as yet, unknown. The current study aimed to investigate the remediation effect of auditory temporal perceptual learning on Mandarin-speaking school children with developmental dyslexia. Thirty children with dyslexia were screened from a large pool of students in 3rd-5th grades. They completed a series of pretests and then were assigned to either a non-training control group or a training group. The training group worked on a pure tone duration discrimination task for 7 sessions over 2 weeks, with thirty minutes per session. Post-tests immediately after training and a follow-up test 2 months later were conducted. Analyses revealed a significant training effect in the training group relative to the non-training group, as well as near transfer to the temporal interval discrimination task and far transfer to phonological awareness, character recognition and reading fluency. Importantly, the training effect and all the transfer effects were stable at the 2-month follow-up session. Further analyses found that a significant correlation between character recognition performance and learning rate existed mainly in the slow learning phase, the consolidation stage of perceptual learning, and this effect was modulated by an individual's executive function. These findings indicate that adaptive auditory temporal perceptual learning can lead to learning and transfer effects on reading performance, and shed further light on the potential role of basic perceptual learning in the remediation and prevention of developmental dyslexia. Copyright © 2018 Elsevier Ltd. All rights reserved.
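
    The adaptive duration-discrimination training described above can be illustrated with a simple staircase sketch. The 2-down/1-up rule, step size, and simulated listener below are common choices in auditory psychophysics and are assumptions for illustration; the abstract does not specify the study's exact tracking procedure.

        import random

        def staircase_2down_1up(simulated_threshold=30.0, start_delta=100.0,
                                step=0.8, n_trials=60):
            """2-down/1-up adaptive track for a duration-discrimination task: the duration
            difference (ms) between standard and target shrinks after two consecutive
            correct responses and grows after an error. Illustrative assumptions only."""
            delta, correct_streak, track = start_delta, 0, []
            for _ in range(n_trials):
                # Simulate a listener: more likely correct when delta exceeds threshold.
                p_correct = 0.5 + 0.5 * min(delta / (2 * simulated_threshold), 1.0)
                correct = random.random() < p_correct
                track.append(delta)
                if correct:
                    correct_streak += 1
                    if correct_streak == 2:     # two correct in a row -> make task harder
                        delta *= step
                        correct_streak = 0
                else:                            # one error -> make task easier
                    delta /= step
                    correct_streak = 0
            return track  # in a real analysis, threshold ~ mean delta at the final reversals

        track = staircase_2down_1up()
        print(f"final duration difference: {track[-1]:.1f} ms")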

  11. Contrast Enhancement without Transient Map Expansion for Species-Specific Vocalizations in Core Auditory Cortex during Learning.

    PubMed

    Shepard, Kathryn N; Chong, Kelly K; Liu, Robert C

    2016-01-01

    Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups' postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning.

  12. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Rapid Auditory Processing and Learning Deficits in Rats With P1 versus P7 Neonatal Hypoxic-Ischemic Injury

    PubMed Central

    McClure, Melissa M.; Threlkeld, Steven W.; Rosen, Glenn D.; Fitch, R. Holly

    2014-01-01

    Hypoxia-ischemia (HI) is associated with premature birth, and injury during term birth. Many infants experiencing HI later show disruptions of language, with research suggesting that rapid auditory processing (RAP) deficits, or impairments in the ability to discriminate rapidly changing acoustic signals, play a causal role in emergent language problems. We recently bridged these lines of research by showing RAP deficits in rats with unilateral-HI injury induced on postnatal day 1, 7, or 10 (P1, P7, or P10; 23). While robust RAP deficits were found in HI animals, it was suggested that our within-age sample size did not provide us with sufficient power to detect Age-at-injury differences within HI groups. The current study sought to examine differences in neuropathology and behavior following unilateral-HI injury in P1 vs. P7 pups. Ages chosen for HI induction reflect differential stages of neurodevelopmental maturity, and subsequent regional differences in vulnerability to reduced blood flow/oxygen (modeling premature/term HI injury). Results showed that during the juvenile period, both P1 and P7 HI groups exhibited significant RAP deficits, but the deficit in the P1 HI group resolved with repeated testing (compared to shams). However, P7 HI animals showed lasting deficits in RAP and spatial learning/memory through adulthood. The current findings are in accord with evidence that HI injury during different stages of developmental maturity (Age-at-injury) leads to differential neuropathologies, and provide the novel observation that in rats, P1 vs. P7 induced pathologies are associated with different patterns of auditory processing and learning/memory deficits across the lifespan. PMID:16765458

  14. The development of interactive multimedia based on auditory, intellectually, repetition in repetition algorithm learning to increase learning outcome

    NASA Astrophysics Data System (ADS)

    Munir; Sutarno, H.; Aisyah, N. S.

    2018-05-01

This research aims to find out how interactive multimedia based on auditory, intellectually, repetition learning can improve student learning outcomes. The interactive multimedia was developed through five stages. The analysis stage included a literature study, questionnaires, interviews and observations. The design phase covered the database design, flowcharts, storyboards and the repetition-algorithm material, while the development phase produced a web-based framework. The presentation of the material follows the auditory, intellectually, repetition learning model: the auditory component is provided by recorded narration of the material, which is presented across a variety of intellectual exercises. The multimedia product was validated by material and media experts. The implementation phase was conducted with grade XI-TKJ2 at SMKN 1 Garut. Based on the normalized gain index, the increase in student learning outcomes in this study was 0.46, which is categorized as moderate and attributed to students' interest in using the interactive multimedia, while the multimedia assessment scored 84.36%, which is categorized as very good.
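
    The gain index of 0.46 reported above is consistent with Hake's normalized gain computed from pre- and post-test scores; the sketch below shows that calculation under that assumption, with hypothetical example scores.

        def normalized_gain(pre, post, max_score=100.0):
            """Hake's normalized gain <g> = (post - pre) / (max - pre).
            Assumed interpretation of the 'gain index' in the abstract; scores are hypothetical."""
            g = (post - pre) / (max_score - pre)
            if g >= 0.7:
                category = "high"
            elif g >= 0.3:
                category = "medium"
            else:
                category = "low"
            return g, category

        # Example: class-average pre-test 40, post-test 67.6 (out of 100) -> g = 0.46, "medium"
        print(normalized_gain(40.0, 67.6))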

  15. A role for descending auditory cortical projections in songbird vocal learning

    PubMed Central

    Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S

    2014-01-01

    Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934

  16. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  17. Identification of a motor to auditory pathway important for vocal learning

    PubMed Central

    Roberts, Todd F.; Hisey, Erin; Tanaka, Masashi; Kearney, Matthew; Chattree, Gaurav; Yang, Cindy F.; Shah, Nirao M.; Mooney, Richard

    2017-01-01

    Summary Learning to vocalize depends on the ability to adaptively modify the temporal and spectral features of vocal elements. Neurons that convey motor-related signals to the auditory system are theorized to facilitate vocal learning, but the identity and function of such neurons remain unknown. Here we identify a previously unknown neuron type in the songbird brain that transmits vocal motor signals to the auditory cortex. Genetically ablating these neurons in juveniles disrupted their ability to imitate features of an adult tutor’s song. Ablating these neurons in adults had little effect on previously learned songs, but interfered with their ability to adaptively modify the duration of vocal elements and largely prevented the degradation of song’s temporal features normally caused by deafening. These findings identify a motor to auditory circuit essential to vocal imitation and to the adaptive modification of vocal timing. PMID:28504672

  18. Encoding of Discriminative Fear Memory by Input-Specific LTP in the Amygdala.

    PubMed

    Kim, Woong Bin; Cho, Jun-Hyeong

    2017-08-30

In auditory fear conditioning, experimental subjects learn to associate an auditory conditioned stimulus (CS) with an aversive unconditioned stimulus. With sufficient training, animals fear conditioned to an auditory CS show a fear response to the CS, but not to irrelevant auditory stimuli. Although long-term potentiation (LTP) in the lateral amygdala (LA) plays an essential role in auditory fear conditioning, it is unknown whether LTP is induced selectively in the neural pathways conveying specific CS information to the LA in discriminative fear learning. Here, we show that postsynaptically expressed LTP is induced selectively in the CS-specific auditory pathways to the LA in a mouse model of auditory discriminative fear conditioning. Moreover, optogenetically induced depotentiation of the CS-specific auditory pathways to the LA suppressed conditioned fear responses to the CS. Our results suggest that input-specific LTP in the LA contributes to fear memory specificity, enabling adaptive fear responses only to the relevant sensory cue. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Sodnik, Jaka; Jakus, Grega; Tomazic, Saso

    2012-01-01

    Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…

  20. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
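
    The SSVEP and ASSR measures described above are evoked responses quantified at the stimulation frequency. The sketch below illustrates the basic step of extracting spectral amplitude at a target frequency from a single EEG epoch; channel selection, artifact rejection and noise-floor estimation used in a real analysis are omitted, and the simulated data are purely illustrative.

        import numpy as np

        def evoked_amplitude(eeg_epoch, fs, target_hz):
            """Amplitude of the EEG spectrum at the stimulation frequency, the basic
            quantity behind SSVEP/ASSR measures. Simplified: one channel, one epoch."""
            eeg = np.asarray(eeg_epoch, dtype=float)
            n = len(eeg)
            spectrum = np.fft.rfft(eeg * np.hanning(n))          # windowed FFT
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            bin_idx = np.argmin(np.abs(freqs - target_hz))       # nearest frequency bin
            return 2.0 * np.abs(spectrum[bin_idx]) / n

        # Example: a simulated 40 Hz ASSR-like component buried in noise
        fs, dur = 1000, 4.0
        t = np.arange(int(fs * dur)) / fs
        epoch = 0.5 * np.sin(2 * np.pi * 40 * t) + np.random.randn(len(t))
        print(f"amplitude at 40 Hz: {evoked_amplitude(epoch, fs, 40.0):.3f} a.u.")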

  1. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  2. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    PubMed

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  3. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    PubMed Central

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444

  4. The Auditory Verbal Learning Test (Rey AVLT): An Arabic Version

    ERIC Educational Resources Information Center

    Sharoni, Varda; Natur, Nazeh

    2014-01-01

    The goals of this study were to adapt the Rey Auditory Verbal Learning Test (AVLT) into Arabic, to compare recall functioning among age groups (6:0 to 17:11), and to compare gender differences on various memory dimensions (immediate and delayed recall, learning rate, recognition, proactive interferences, and retroactive interferences). This…

  5. Evaluating the Use of Auditory Systems to Improve Performance in Combat Search and Rescue

    DTIC Science & Technology

    2012-03-01

    take advantage of human binaural hearing to present spatial information through auditory stimuli as it would occur in the real world. This allows the...multiple operators unambiguously and in a short amount of time. Spatial audio basics Spatial audio works with human binaural hearing to generate... binaural recordings “sound better” when heard in the same location where the recordings were made. While this appears to be related to the acoustic

  6. Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons

    PubMed Central

    Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.

    2015-01-01

    The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746

  7. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.

  8. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    PubMed

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  9. Dynamic reconfiguration of human brain functional networks through neurofeedback.

    PubMed

    Haller, Sven; Kopel, Rotem; Jhooti, Permi; Haas, Tanja; Scharnowski, Frank; Lovblad, Karl-Olof; Scheffler, Klaus; Van De Ville, Dimitri

    2013-11-01

    Recent fMRI studies demonstrated that functional connectivity is altered following cognitive tasks (e.g., learning) or due to various neurological disorders. We tested whether real-time fMRI-based neurofeedback can be a tool to voluntarily reconfigure brain network interactions. To disentangle learning-related from regulation-related effects, we first trained participants to voluntarily regulate activity in the auditory cortex (training phase) and subsequently asked participants to exert learned voluntary self-regulation in the absence of feedback (transfer phase without learning). Using independent component analysis (ICA), we found network reconfigurations (increases in functional network connectivity) during the neurofeedback training phase between the auditory target region and (1) the auditory pathway; (2) visual regions related to visual feedback processing; (3) insula related to introspection and self-regulation and (4) working memory and high-level visual attention areas related to cognitive effort. Interestingly, the auditory target region was identified as the hub of the reconfigured functional networks without a-priori assumptions. During the transfer phase, we again found specific functional connectivity reconfiguration between auditory and attention network confirming the specific effect of self-regulation on functional connectivity. Functional connectivity to working memory related networks was no longer altered consistent with the absent demand on working memory. We demonstrate that neurofeedback learning is mediated by widespread changes in functional connectivity. In contrast, applying learned self-regulation involves more limited and specific network changes in an auditory setup intended as a model for tinnitus. Hence, neurofeedback training might be used to promote recovery from neurological disorders that are linked to abnormal patterns of brain connectivity. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Incidental category learning and cognitive load in a multisensory environment across childhood.

    PubMed

    Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z

    2018-06-01

    Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    PubMed

    Murphy, Cristina F B; Moore, David R; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research is required to investigate the effects of various stimuli and lengths of training on the generalization of sensory and cognitive learning to literacy skills.

  12. Different Verbal Learning Strategies in Autism Spectrum Disorder: Evidence from the Rey Auditory Verbal Learning Test

    ERIC Educational Resources Information Center

    Bowler, Dermot M.; Limoges, Elyse; Mottron, Laurent

    2009-01-01

    The Rey Auditory Verbal Learning Test, which requires the free recall of the same list of 15 unrelated words over 5 trials, was administered to 21 high-functioning adolescents and adults with autism spectrum disorder (ASD) and 21 matched typical individuals. The groups showed similar overall levels of free recall, rates of learning over trials and…

  13. Auditory Attentional Control and Selection during Cocktail Party Listening

    PubMed Central

    Hill, Kevin T.

    2010-01-01

    In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393

  14. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
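
    The crossmodal cue validity effect reported here is simply the difference between mean response times on invalidly and validly cued trials, computed separately for each cue valence. A minimal sketch of that computation, with hypothetical column names and invented response times:

```python
# Minimal sketch: compute a cue validity effect (invalid minus valid mean RT)
# per cue valence. Column names and data are hypothetical, not study data.
import pandas as pd

trials = pd.DataFrame({
    "valence": ["pleasant", "pleasant", "unpleasant", "unpleasant", "neutral", "neutral"],
    "validity": ["valid", "invalid"] * 3,
    "rt_ms": [412, 448, 405, 451, 420, 439],
})

mean_rt = trials.groupby(["valence", "validity"])["rt_ms"].mean().unstack()
cue_validity_effect = mean_rt["invalid"] - mean_rt["valid"]
print(cue_validity_effect)  # larger values indicate stronger spatial cueing
```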

  15. Research and Studies Directory for Manpower, Personnel, and Training

    DTIC Science & Technology

    1989-05-01

    LOUIS MO 314-889-6805 CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX TANGNEY J AIR FORCE OFFICE OF SCIENTIFIC RESEARCH 202-767-5021 A MODEL FOR...VISUAL ATTENTION AUDITORY PERCEPTION OF COMPLEX SOUNDS CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX EYE MOVEMENTS AND SPATIAL PATTERN VISION EYE

  16. Mathematical disposition of junior high school students viewed from learning styles

    NASA Astrophysics Data System (ADS)

    Putra, Arief Karunia; Budiyono, Slamet, Isnandar

    2017-08-01

    The relevance of this study lies in the growth of character values among students in Indonesia. Mathematics is a subject that builds character values in students. This can be seen in students' confidence in answering mathematics problems and in their persistence and resilience in mathematics tasks. In addition, students show curiosity about mathematics and appreciate its usefulness. In mathematics, this is called a mathematical disposition. One of the factors that can affect students' mathematical disposition is learning style. Each student has a dominant learning style; three of the most popular are visual, auditory, and kinesthetic. The most important use of learning styles is that they make it easy for teachers to incorporate them into their teaching. The purpose of this study was to determine which learning style, visual, auditory, or kinesthetic, is associated with a better mathematical disposition. The subjects were 150 students in Sleman regency, and data were obtained through questionnaires. Based on data analysis with a benchmark assessment method, it can be concluded that students with a visual learning style have a better mathematical disposition than students with auditory and kinesthetic learning styles, while students with a kinesthetic learning style have a better mathematical disposition than students with an auditory learning style. These results can be used as a reference for students with different learning styles to improve their positive mathematical disposition in the process of learning mathematics.

  17. Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.

    PubMed

    Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin

    2017-02-01

    The current study investigates cognitive processes, as reflected in late auditory-evoked potentials, as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions between two such syllables that served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks, were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that the longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention, and memory updating is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
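
    Global field power (GFP), the quantity tracked for the N2b- and P3b-related microstates, is conventionally the spatial standard deviation across electrodes at each time point. A short sketch under that standard definition (the array shape and simulated data are assumptions, not the study's recordings):

```python
# Sketch of global field power (GFP): spatial standard deviation across
# electrodes at each sample. The (channels x samples) shape is an assumption.
import numpy as np

def global_field_power(eeg):
    """eeg: array of shape (n_channels, n_samples)."""
    eeg = eeg - eeg.mean(axis=0, keepdims=True)   # re-reference to the common average
    return np.sqrt((eeg ** 2).mean(axis=0))       # GFP per time point

eeg = np.random.randn(64, 500)  # 64 channels, 500 samples of simulated data
gfp = global_field_power(eeg)
print(gfp.shape, round(float(gfp.mean()), 3))
```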

  18. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
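
    The simulations described here use Gaussian mixture models to acquire categories from the joint distribution of auditory and visual cues. The snippet below is a schematic stand-in using scikit-learn with invented two-dimensional cue data, not the authors' model or parameters:

```python
# Schematic stand-in (not the authors' code): fit a two-category Gaussian
# mixture to joint auditory-visual cue values and inspect the learned means.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical cues: column 0 = an auditory cue, column 1 = a visual cue (e.g., lip aperture).
cat_a = rng.normal(loc=[10.0, 0.2], scale=[5.0, 0.05], size=(500, 2))
cat_b = rng.normal(loc=[60.0, 0.6], scale=[8.0, 0.08], size=(500, 2))
cues = np.vstack([cat_a, cat_b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(cues)
print(gmm.means_)                  # recovered category centers in cue space
print(gmm.predict([[35.0, 0.4]]))  # category assignment for an ambiguous token
```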

  19. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  20. Contrast Enhancement without Transient Map Expansion for Species-Specific Vocalizations in Core Auditory Cortex during Learning

    PubMed Central

    Shepard, Kathryn N.; Chong, Kelly K.

    2016-01-01

    Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups’ postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning. PMID:27957529

  1. Beta phase synchronization in the frontal-temporal-cerebellar network during auditory-to-motor rhythm learning.

    PubMed

    Edagawa, Kouki; Kawasaki, Masahiro

    2017-02-22

    Rhythm is an essential element of dancing and music. To investigate the neural mechanisms underlying how rhythm is learned, we recorded electroencephalographic (EEG) data during a rhythm-reproducing task that asked participants to memorize an auditory stimulus and reproduce it via tapping. Based on the behavioral results, we divided the participants into Learning and No-learning groups. EEG analysis showed that error-related negativity (ERN) in the Learning group was larger than in the No-learning group. Time-frequency analysis of the EEG data showed that, in the Learning group, beta power in the right and left temporal areas was smaller at the late learning stage than at the early learning stage. Additionally, beta power in the temporal and cerebellar areas while learning to reproduce the rhythm was larger in the Learning group than in the No-learning group. Moreover, phase synchronization between frontal and temporal regions and between temporal and cerebellar regions was larger at late stages of learning than at early stages. These results indicate that frontal-temporal-cerebellar beta neural circuits might be related to auditory-motor rhythm learning.
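
    Phase synchronization between scalp regions, as analyzed here, is commonly summarized by a phase-locking value (PLV) computed from band-pass filtered, Hilbert-transformed signals. A sketch of a beta-band PLV between two channels under that standard definition (the filter order, band edges, and sampling rate are assumptions):

```python
# Sketch: beta-band (13-30 Hz) phase-locking value between two EEG channels,
# using a standard Hilbert-transform approach. Sampling rate is assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_plv(x, y, fs):
    b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 20 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(round(beta_plv(x, y, fs), 2))  # near 1 for a consistent phase lag
```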

  2. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia

    PubMed Central

    Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  3. The Effects of Restricted Peripheral Field-of-View on Spatial Learning while Navigating.

    PubMed

    Barhorst-Cates, Erica M; Rand, Kristina M; Creem-Regehr, Sarah H

    2016-01-01

    Recent work with simulated reductions in visual acuity and contrast sensitivity has found decrements in survey spatial learning as well as increased attentional demands when navigating, compared to performance with normal vision. Given these findings, and previous work showing that peripheral field loss has been associated with impaired mobility and spatial memory for room-sized spaces, we investigated the role of peripheral vision during navigation using a large-scale spatial learning paradigm. First, we aimed to establish the magnitude of spatial memory errors at different levels of field restriction. Second, we tested the hypothesis that navigation under these different levels of restriction would use additional attentional resources. Normally sighted participants walked on novel real-world paths wearing goggles that restricted the field-of-view (FOV) to severe (15°, 10°, 4°, or 0°) or mild angles (60°) and then pointed to remembered target locations using a verbal reporting measure. They completed a concurrent auditory reaction time task throughout each path to measure cognitive load. Only the most severe restrictions (4° and blindfolded) showed impairment in pointing error compared to the mild restriction (within-subjects). The 10° and 4° conditions also showed an increase in reaction time on the secondary attention task, suggesting that navigating with these extreme peripheral field restrictions demands the use of limited cognitive resources. This comparison of different levels of field restriction suggests that although peripheral field loss requires the actor to use more attentional resources while navigating starting at a less extreme level (10°), spatial memory is not negatively affected until the restriction is very severe (4°). These results have implications for understanding of the mechanisms underlying spatial learning during navigation and the approaches that may be taken to develop assistance for navigation with visual impairment.

  4. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    ERIC Educational Resources Information Center

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  5. Influence of Syllable Structure on L2 Auditory Word Learning

    ERIC Educational Resources Information Center

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  6. The influence of the Japanese waving cat on the joint spatial compatibility effect: A replication and extension of Dolk, Hommel, Prinz, and Liepelt (2013).

    PubMed

    Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph

    2017-01-01

    In a joint go/no-go Simon task, each of two participants is to respond to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task-setting in which non-social objects like a Japanese waving cat were present, but no real co-actor. They postulated that every sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient is so far not well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual joint go/no-go Simon task (Experiment 2). Results revealed that joint spatial compatibility effects only occurred in an auditory Simon task when the cat provided auditory cues while no joint spatial compatibility effects were found in a visual Simon task. This demonstrates that it is not the sufficiently salient object alone that leads to joint spatial compatibility effects but instead, a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.

  7. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

    We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although, these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.

  8. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than that obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
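
    The right-left prevalence effect is assessed by contrasting the horizontal and vertical compatibility effects, i.e., incompatible minus compatible mean response time on each dimension. A toy sketch of that contrast with hypothetical data:

```python
# Toy sketch: horizontal vs. vertical compatibility effects from RT data.
# Column names and values are hypothetical, not study data.
import pandas as pd

trials = pd.DataFrame({
    "dimension":  ["horizontal"] * 4 + ["vertical"] * 4,
    "compatible": [True, True, False, False] * 2,
    "rt_ms":      [430, 425, 480, 476, 440, 442, 455, 452],
})

effects = (trials[~trials.compatible].groupby("dimension")["rt_ms"].mean()
           - trials[trials.compatible].groupby("dimension")["rt_ms"].mean())
print(effects)  # a larger horizontal effect indicates right-left prevalence
```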

  9. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and...Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB...fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side of

  10. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    PubMed

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

    Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).

  11. Linear multivariate evaluation models for spatial perception of soundscape.

    PubMed

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    A soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understanding. Spatial perception is significant to soundscape, yet previous studies on the auditory spatial perception of soundscape environments have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments on subjective spatial perception (SSP), an analysis relating semantic parameters, the interaural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamics (D), and SSP is introduced to verify the independent effect of each parameter and to re-examine some of their possible relationships. The results show that the more noisiness listeners perceived, the worse their spatial awareness, whereas the closer and more directional the sound-source image variations, the dynamics, and the number of sound sources in the soundscape, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamics, together with the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also appears to promote listeners' preference slightly. Finally, with SSP expressed as functions of the semantic parameters and of Leq, D, and IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
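
    The proposed evaluation models are multiple linear regressions that predict subjective spatial perception (SSP) from Leq, D, and IACC (and, analogously, from the semantic parameters). A generic sketch of fitting such a model is shown below; the data and coefficients are invented and are not those reported in the study.

```python
# Generic sketch: fit SSP as a linear function of Leq, D, and IACC.
# The data are invented; only the model form follows the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(45, 75, 21),    # Leq in dB(A)
    rng.uniform(2, 15, 21),     # dynamics D in dB
    rng.uniform(0.1, 0.9, 21),  # IACC
])
ssp = 3.0 - 0.03 * X[:, 0] + 0.05 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(0, 0.2, 21)

model = LinearRegression().fit(X, ssp)
print(model.coef_, round(model.intercept_, 2))  # weights for Leq, D, IACC
```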

  12. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environmentsa)

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.

    2012-01-01

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585

  13. Results from a National Central Auditory Processing Disorder Service: A Real-World Assessment of Diagnostic Practices and Remediation for Central Auditory Processing Disorder

    PubMed Central

    Cameron, Sharon; Glyde, Helen; Dillon, Harvey; King, Alison; Gillies, Karin

    2015-01-01

    This article describes the development and evaluation of a national service to diagnose and remediate central auditory processing disorder (CAPD). Data were gathered from 38 participating Australian Hearing centers over an 18-month period from 666 individuals aged 6;0 (years;months) to 24;8 (median 9;0). A total of 408 clients were diagnosed with either a spatial processing disorder (n = 130), a verbal memory deficit (n = 174), or a binaural integration deficit (n = 104). A hierarchical test protocol was used so not all children were assessed on all tests in the battery. One hundred fifty clients decided to proceed with deficit-specific training (LiSN & Learn or Memory Booster) and/or be fitted with a frequency modulation system. Families were provided with communication strategies targeted to a child's specific listening difficulties and goals. Outcomes were measured using repeat assessment of the relevant diagnostic test, as well as the Client Oriented Scale of Improvement measure and Listening Inventories for Education teacher questionnaire. Group analyses revealed significant improvements postremediation for all training/management options. Individual posttraining performance and results of outcome measures also are discussed. PMID:27587910

  14. "Who" is saying "what"? Brain-based decoding of human voice and speech.

    PubMed

    Formisano, Elia; De Martino, Federico; Bonte, Milene; Goebel, Rainer

    2008-11-07

    Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.
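
    The brain-based decoding described here amounts to training a pattern classifier on distributed cortical responses and testing it on held-out data. A deliberately simplified sketch with simulated voxel patterns and a linear support-vector classifier (not the authors' data-mining algorithm):

```python
# Simplified sketch of pattern-based decoding: classify "who is speaking"
# from simulated voxel patterns with cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 60, 200
labels = np.repeat([0, 1, 2], n_trials // 3)      # three speakers
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.8                 # weak speaker-specific signal
patterns[labels == 2, 20:40] += 0.8

scores = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5)
print(round(scores.mean(), 2))  # above 1/3 indicates decodable speaker identity
```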

  15. Spatialized audio improves call sign recognition during multi-aircraft control.

    PubMed

    Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J

    2018-07-01

    We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system--as opposed to monophonic, binaural presentation--significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.
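
    The display concept itself is straightforward: a parsed audio message is routed to one ear or the other depending on its relevance. A minimal stereo-panning sketch (the relevance flag, tone stimulus, and sampling rate are placeholders for the parser output and verbal call signs used in the study):

```python
# Minimal sketch: place a mono audio cue in the left or right channel
# depending on a (hypothetical) relevance flag from an automated parser.
import numpy as np

def spatialize(mono, relevant):
    """Return a (samples, 2) stereo buffer: relevant cues left, others right."""
    stereo = np.zeros((mono.size, 2))
    stereo[:, 0 if relevant else 1] = mono
    return stereo

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
cue = 0.3 * np.sin(2 * np.pi * 440 * t)            # stand-in for a verbal call sign
left_right = spatialize(cue, relevant=True)
print(left_right.shape, left_right[:, 1].max())    # right channel stays silent
```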

  16. Profiles of Types of Central Auditory Processing Disorders in Children with Learning Disabilities.

    ERIC Educational Resources Information Center

    Musiek, Frank E.; And Others

    1985-01-01

    The article profiles five cases of children (8-17 years old) with learning disabilities and auditory processing problems. Possible correlations between the presumed etiology and the unique audiological pattern on the central test battery are analyzed. (Author/CL)

  17. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

    Numerous animals across disparate taxa must identify and locate complex acoustic signals imbedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that once grouped the separate call components are weighted differently in recognizing and locating the call, so called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  19. Auditory and Visual Working Memory Functioning in College Students with Attention-Deficit/Hyperactivity Disorder and/or Learning Disabilities.

    PubMed

    Liebel, Spencer W; Nelson, Jason M

    2017-12-01

    We investigated the auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprise four groups of the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders comprise a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001, d = -0.85. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity or clinical control groups. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
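
    The within-group comparisons reported above (e.g., t(71) = -6.19, d = -0.85) are paired t-tests on auditory versus visual working memory scores with a standardized effect size. A generic sketch of that analysis on simulated scores (the effect size below is computed from the paired differences, one common convention):

```python
# Generic sketch: paired t-test and Cohen's d for auditory vs. visual
# working memory scores. The scores are simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
visual = rng.normal(100, 15, 72)
auditory = visual - rng.normal(8, 10, 72)      # simulate weaker auditory WM

t_stat, p_val = stats.ttest_rel(auditory, visual)
diff = auditory - visual
cohens_d = diff.mean() / diff.std(ddof=1)      # effect size from paired differences
print(f"t({len(diff) - 1}) = {t_stat:.2f}, p = {p_val:.3g}, d = {cohens_d:.2f}")
```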

  20. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    ERIC Educational Resources Information Center

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  1. Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus.

    PubMed

    Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J

    1998-12-01

    Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus. J. Neurophysiol. 80: 2941-2953, 1998. The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalities integrated with a motor output system. It is implicated in saccade generation and in behavioral responses to novel sensory stimuli, and it receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to chronically deliver neurotransmitter receptor antagonists to the SC of the guinea pig to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer could develop a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine resulted in a reduction in the proportion of spatially tuned responses, with an increase in the proportion of broadly tuned responses and a degradation in topographic order. Thus N-methyl-D-aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some but not total disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC but not necessarily in the generation of topographical order of these responses.
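
    One simple way to summarize the azimuthal tuning captured by such polar plots is to compute a vector average of spike counts across the tested azimuths, yielding a best azimuth and a tuning-strength index. This is a generic summary measure, not the paper's exact analysis; a minimal sketch with hypothetical spike counts:

```python
# Hedged sketch: summarizing a multiunit's azimuthal tuning from spike counts at
# tested speaker azimuths via a vector average. Counts and grid are hypothetical.
import numpy as np

azimuths_deg = np.arange(-90, 91, 15)                                     # hypothetical azimuth grid
spike_counts = np.array([2, 3, 4, 6, 9, 14, 20, 24, 22, 16, 10, 6, 4])    # hypothetical counts

theta = np.deg2rad(azimuths_deg)
vec = np.sum(spike_counts * np.exp(1j * theta)) / spike_counts.sum()

best_azimuth = np.rad2deg(np.angle(vec))   # count-weighted best direction
tuning_strength = np.abs(vec)              # ~1 = sharply tuned, ~0 = broad/untuned
print(f"best azimuth ~ {best_azimuth:.0f} deg, vector strength = {tuning_strength:.2f}")
```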

  2. Simultaneous acquisition of multiple auditory-motor transformations in speech

    PubMed Central

    Rochet-Capellan, Amelie; Ostry, David J.

    2011-01-01

    The brain easily generates the movement that is needed in a given situation. Yet surprisingly, the results of experimental studies suggest that it is difficult to acquire more than one skill at a time. To do so, it has generally been necessary to link the required movement to arbitrary cues. In the present study, we show that speech motor learning provides an informative model for the acquisition of multiple sensorimotor skills. During training, subjects are required to repeat aloud individual words in random order while auditory feedback is altered in real-time in different ways for the different words. We find that subjects can quite readily and simultaneously modify their speech movements to correct for these different auditory transformations. This multiple learning occurs effortlessly without explicit cues and without any apparent awareness of the perturbation. The ability to simultaneously learn several different auditory-motor transformations is consistent with the idea that in speech motor learning, the brain acquires instance specific memories. The results support the hypothesis that speech motor learning is fundamentally local. PMID:21325534

  3. Sound Sequence Discrimination Learning Motivated by Reward Requires Dopaminergic D2 Receptor Activation in the Rat Auditory Cortex

    ERIC Educational Resources Information Center

    Kudoh, Masaharu; Shibuki, Katsuei

    2006-01-01

    We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…

  4. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  5. The Effects of Attentional Engagement on Route Learning Performance in a Virtual Environment: An Aging Study

    PubMed Central

    Hartmeyer, Steffen; Grzeschik, Ramona; Wolbers, Thomas; Wiener, Jan M.

    2017-01-01

    Route learning is a common navigation task affected by cognitive aging. Here we present a novel experimental paradigm to investigate whether age-related declines in executive control of attention contribute to route learning deficits. A young and an older participant group were repeatedly presented with a route through a virtual maze comprising 12 decision points (DP) and non-decision points (non-DP). To investigate attentional engagement with the route learning task, participants had to respond to auditory probes at both DP and non-DP. Route knowledge was assessed by showing participants screenshots or landmarks from DPs and non-DPs and asking them to indicate the movement direction required to continue the route. Results demonstrate better performance for DPs than for non-DPs and slower responses to auditory probes at DPs compared to non-DPs. As expected, we found slower route learning and slower responses to the auditory probes in the older participant group. Interestingly, differences in response times to the auditory probes between DPs and non-DPs can predict the success of route learning in both age groups and may explain slower knowledge acquisition in the older participant group. PMID:28775689

  6. Matching Teaching and Learning Styles.

    ERIC Educational Resources Information Center

    Caudill, Gil

    1998-01-01

    Outlines three basic learning modalities--auditory, visual, and tactile--and notes that technology can help incorporate multiple modalities within each lesson, to meet the needs of most students. Discusses the importance in multiple modality teaching of effectively assessing students. Presents visual, auditory and tactile activity suggestions.…

  7. Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities

    PubMed Central

    Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo

    2015-01-01

    Introduction: Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective: To characterize the auditory middle latency response and the phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods: The study included 25 students with learning disabilities. Phonological awareness was assessed, and the auditory middle latency response was recorded with electrodes placed over the left and right hemispheres. The correlation between the measurements was computed using the Spearman rank correlation coefficient. Results: There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion: In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed a low score for their age group, although on the objective examination, prolonged Pa latency in the contralateral pathway was observed. A weak to moderate negative correlation for Pa wave latency was observed, as was a weak positive correlation for Na-Pa amplitude. PMID:26491479
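
    The analysis described above is a rank correlation between an electrophysiological measure (Pa latency) and a behavioral score (syllabic awareness). A minimal sketch of the computation with scipy, using hypothetical values rather than the study's measurements:

```python
# Hedged sketch: Spearman rank correlation between middle-latency Pa latency and
# a syllabic-awareness score. Values are hypothetical placeholders, not study data.
import numpy as np
from scipy.stats import spearmanr

pa_latency_ms = np.array([28.1, 30.4, 26.9, 33.0, 31.2, 29.5, 34.8, 27.7])
syllabic_awareness = np.array([18, 14, 20, 11, 13, 16, 9, 19])

rho, p = spearmanr(pa_latency_ms, syllabic_awareness)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # negative rho ~ longer latency, lower score
```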

  8. Analysis of the critical thinking process of junior high school students in solving geometric problems by utilizing the v-a-k learning styles model

    NASA Astrophysics Data System (ADS)

    Hananto, R. B.; Kusmayadi, T. A.; Riyadi

    2018-05-01

    The research aims to identify the critical thinking process of students in solving geometry problems. The geometry problem selected in this study involved a flat-sided solid (a cube). The critical thinking process was examined for visual, auditory, and kinesthetic learning styles. This research was a descriptive analysis using a qualitative method. The subjects were 3 students selected by purposive sampling, one each with a visual, auditory, and kinesthetic learning style. Data collection was done through tests, interviews, and observation. The results showed that the students' critical thinking processes in the identifying and defining steps were similar across learning styles. Differences in critical thinking were seen in the enumerate, analyze, list, and self-correct steps. It was also found that the critical thinking process of the student with a kinesthetic learning style was better than that of the students with visual and auditory learning styles.

  9. Mapping feature-sensitivity and attentional modulation in human auditory cortex with functional magnetic resonance imaging

    PubMed Central

    Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A

    2011-01-01

    Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093

  10. Visual abilities are important for auditory-only speech recognition: evidence from autism spectrum disorder.

    PubMed

    Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina

    2014-12-01

    In auditory-only conditions, for example when we listen to someone on the phone, it is essential to quickly and accurately recognize what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Local inhibition modulates learning-dependent song encoding in the songbird auditory cortex

    PubMed Central

    Thompson, Jason V.; Jeanne, James M.

    2013-01-01

    Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals. PMID:23155175

  12. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    NASA Astrophysics Data System (ADS)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) Investigate source segregation abilities in children with NH and with BiCIs; (2) Examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) Investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) Examine source segregation abilities in NH listeners, from school-age to adults.

  13. EGR-1 Expression in Catecholamine-synthesizing Neurons Reflects Auditory Learning and Correlates with Responses in Auditory Processing Areas.

    PubMed

    Dai, Jennifer B; Chen, Yining; Sakata, Jon T

    2018-05-21

    Distinguishing between familiar and unfamiliar individuals is an important task that shapes the expression of social behavior. As such, identifying the neural populations involved in processing and learning the sensory attributes of individuals is important for understanding mechanisms of behavior. Catecholamine-synthesizing neurons have been implicated in sensory processing, but relatively little is known about their contribution to auditory learning and processing across various vertebrate taxa. Here we investigated the extent to which immediate early gene expression in catecholaminergic circuitry reflects information about the familiarity of social signals and predicts immediate early gene expression in sensory processing areas in songbirds. We found that male zebra finches readily learned to differentiate between familiar and unfamiliar acoustic signals ('songs') and that playback of familiar songs led to fewer catecholaminergic neurons in the locus coeruleus (but not in the ventral tegmental area, substantia nigra, or periaqueductal gray) expressing the immediate early gene, EGR-1, than playback of unfamiliar songs. The pattern of EGR-1 expression in the locus coeruleus was similar to that observed in two auditory processing areas implicated in auditory learning and memory, namely the caudomedial nidopallium (NCM) and the caudal medial mesopallium (CMM), suggesting a contribution of catecholamines to sensory processing. Consistent with this, the pattern of catecholaminergic innervation onto auditory neurons co-varied with the degree to which song playback affected the relative intensity of EGR-1 expression. Together, our data support the contention that catecholamines like norepinephrine contribute to social recognition and the processing of social information. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  15. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
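
    Spatializing speech by FIR filtering with simplified head-related transfer functions amounts to convolving the mono input with a left-ear and a right-ear impulse response for the target azimuth. A minimal sketch of that step, using crude placeholder impulse responses (an interaural delay and level difference) rather than measured HRTFs:

```python
# Hedged sketch: spatializing a mono signal by FIR filtering with a pair of
# head-related impulse responses (HRIRs). The HRIRs here are crude placeholders,
# not measured transfer functions or the display's actual filters.
import numpy as np
from scipy.signal import fftconvolve

fs = 16000
mono = np.random.randn(fs)     # placeholder 1-s "speech" signal

# Placeholder HRIRs for a source ~60 degrees to the right: the right ear leads
# slightly and is louder; a real system would load measured HRIRs instead.
delay_samples = 10             # ~0.6 ms interaural time difference
hrir_right = np.zeros(64)
hrir_right[0] = 1.0
hrir_left = np.zeros(64)
hrir_left[delay_samples] = 0.6

left = fftconvolve(mono, hrir_left)[:len(mono)]
right = fftconvolve(mono, hrir_right)[:len(mono)]
binaural = np.stack([left, right], axis=1)   # 2-channel output for headphones
```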

  16. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  17. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
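
    The intelligibility levels in these studies were estimated with an adaptive staircase, which lowers the signal-to-babble ratio after correct call-sign identifications and raises it after errors. A minimal sketch of a simple 1-down/1-up staircase with a simulated listener; the step size, starting level, and psychometric function are illustrative assumptions, not the study's parameters:

```python
# Hedged sketch: a 1-down/1-up adaptive staircase tracking the signal-to-babble
# ratio (dB) for call-sign identification. The listener is simulated with a
# logistic psychometric function; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def listener_correct(snr_db, threshold_db=-8.0, slope=1.0):
    p = 1.0 / (1.0 + np.exp(-(snr_db - threshold_db) * slope))
    return rng.random() < p

snr = 0.0                 # starting signal-to-babble ratio in dB
step = 2.0                # dB step size
reversals, last_correct = [], None

while len(reversals) < 8:
    correct = listener_correct(snr)
    if last_correct is not None and correct != last_correct:
        reversals.append(snr)                 # direction change = reversal
    snr += -step if correct else step         # down after correct, up after error
    last_correct = correct

threshold_estimate = np.mean(reversals[-6:])  # average of the last reversals
print(f"estimated threshold ~ {threshold_estimate:.1f} dB SNR")
```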

  18. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  19. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  20. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task.

    PubMed

    Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural process of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation in the PFC compared to normal performance (NP) subjects. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in the bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.

  1. Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.

    PubMed

    Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin

    2017-07-05

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  2. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task

    PubMed Central

    Anderson, Afrouz A.; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Chowdhry, Fatima A.; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H.

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural process of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation in the PFC compared to normal performance (NP) subjects. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in the bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing. PMID:29870536

  3. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Decoding Multiple Sound Categories in the Human Temporal Cortex Using High Resolution fMRI

    PubMed Central

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C. M.

    2015-01-01

    Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases. PMID:25692885
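
    MSVM-RFE couples a multi-class linear support vector machine with recursive feature elimination, repeatedly discarding the voxels that carry the smallest weights before classifying held-out patterns. A minimal sketch with scikit-learn on a hypothetical trials-by-voxels matrix; the data, feature counts, and parameters are placeholders, not the study's pipeline:

```python
# Hedged sketch: multi-class linear SVM with recursive feature elimination (RFE)
# and cross-validated classification of sound-evoked activation patterns.
# X and y are random placeholders, not fMRI data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))      # 120 trials x 500 voxels (placeholder patterns)
y = rng.integers(0, 4, size=120)     # 4 sound categories (placeholder labels)

# RFE keeps the 50 highest-weight voxels, removing 10% of features per iteration;
# a fresh linear SVM then classifies the reduced patterns.
clf = make_pipeline(
    RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=50, step=0.1),
    LinearSVC(C=1.0, max_iter=5000),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f} (chance = 0.25)")
```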

  5. Decoding multiple sound categories in the human temporal cortex using high resolution fMRI.

    PubMed

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C M

    2015-01-01

    Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases.

  6. Developmental changes in automatic rule-learning mechanisms across early childhood.

    PubMed

    Mueller, Jutta L; Friederici, Angela D; Männel, Claudia

    2018-06-27

    Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as predictor for potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years, and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability of automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes for age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.

  7. Learning to Encode Timing: Mechanisms of Plasticity in the Auditory Brainstem

    PubMed Central

    Tzounopoulos, Thanos; Kraus, Nina

    2009-01-01

    Mechanisms of plasticity have traditionally been ascribed to higher-order sensory processing areas such as the cortex, whereas early sensory processing centers have been considered largely hard-wired. In agreement with this view, the auditory brainstem has been viewed as a nonplastic site, important for preserving temporal information and minimizing transmission delays. However, recent groundbreaking results from animal models and human studies have revealed remarkable evidence for cellular and behavioral mechanisms for learning and memory in the auditory brainstem. PMID:19477149

  8. The role of auditory feedback in music-supported stroke rehabilitation: A single-blinded randomised controlled intervention.

    PubMed

    van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E

    2016-01-01

    Learning to play musical instruments such as piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds are presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. Conversely, some clinical tests suggest the jitter group outperformed the normal group. Auditory feedback-based motor learning is not the beneficial mechanism of music-supported therapy. Immediate auditory feedback therapy may be suboptimal. Jittered delay may increase efficacy of the proposed therapy and allow patients to fully benefit from motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.

  9. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  10. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
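
    Attention modulation of this kind is usually quantified by averaging the epochs time-locked to probes in the attended and unattended stories and taking their difference. A minimal sketch of that averaging step on a hypothetical epochs array (trials x channels x samples); the sampling rate, channel count, and condition labels are placeholders:

```python
# Hedged sketch: average epoched EEG to auditory probes embedded in attended vs.
# unattended stories and compute the attention difference wave.
# The epochs array and labels are random placeholders, not recorded data.
import numpy as np

fs = 250
epochs = np.random.randn(200, 32, int(0.8 * fs))   # 200 epochs x 32 channels x 800 ms
attended = np.zeros(200, dtype=bool)
attended[:100] = True                              # placeholder condition labels

erp_attended = epochs[attended].mean(axis=0)       # channels x samples
erp_unattended = epochs[~attended].mean(axis=0)
attention_effect = erp_attended - erp_unattended   # attention-modulation difference wave

times_ms = np.arange(attention_effect.shape[1]) / fs * 1000
peak_latency = times_ms[np.abs(attention_effect[10]).argmax()]   # e.g., channel index 10
print(f"peak attention effect at ~{peak_latency:.0f} ms on channel 10")
```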

  11. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  12. Masking Level Difference Response Norms from Learning Disabled Individuals.

    ERIC Educational Resources Information Center

    Waryas, Paul A.; Battin, R. Ray

    1985-01-01

    The study presents normative data on the Masking Level Difference (an improvement in signal detection based on interaural time/intensity differences between signal and masking noise) for 90 learning disabled persons (4-35 years old). It was concluded that the MLD may quickly screen for auditory processing problems. (CL)
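
    The masking level difference itself is the improvement in detection threshold obtained when the interaural phase of the signal differs from that of the masker (e.g., SπN0 versus S0N0). A minimal worked example of the computation on hypothetical thresholds:

```python
# Hedged sketch: masking level difference (MLD) computed from hypothetical
# detection thresholds (dB) in the homophasic (S0N0) and antiphasic (SpiN0) conditions.
threshold_s0n0_db = -8.0    # signal and noise in phase at both ears (hypothetical)
threshold_spin0_db = -20.0  # signal phase-inverted at one ear (hypothetical)

mld_db = threshold_s0n0_db - threshold_spin0_db
print(f"MLD = {mld_db:.1f} dB")  # larger MLD = greater binaural release from masking
```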

  13. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
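
    The directional claim (high gamma predicting alpha, but not the reverse) rests on Granger causality applied to band-power time courses. A minimal sketch that extracts alpha and high gamma envelopes with a bandpass filter and Hilbert transform and runs a Granger test, assuming statsmodels and synthetic placeholder signals rather than ECoG recordings:

```python
# Hedged sketch: band-power envelopes (alpha, high gamma) via bandpass filtering
# and the Hilbert transform, then a Granger test of whether the gamma envelope
# predicts the alpha envelope. The signal is a synthetic placeholder, not ECoG.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from statsmodels.tsa.stattools import grangercausalitytests

fs = 1000
ecog = np.random.randn(60 * fs)            # placeholder 60-s "ECoG" channel

def band_envelope(x, low, high, fs):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

alpha_env = band_envelope(ecog, 8, 12, fs)
gamma_env = band_envelope(ecog, 70, 110, fs)

# Downsample the envelopes, then test whether column 1 (gamma) Granger-causes
# column 0 (alpha); ssr_ftest p-values are reported per lag.
data = np.column_stack([alpha_env[::50], gamma_env[::50]])
res = grangercausalitytests(data, maxlag=5, verbose=False)
print({lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()})
```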

  14. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete

    PubMed Central

    Jarick, Michelle; Stewart, Mark T.; Smilek, Daniel; Dixon, Michael J.

    2013-01-01

    Time-space synaesthetes “see” time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred “auditory” viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the “preferred” auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009). PMID:24137140

  15. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important cues used to fuse or segregate sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To find the answer to that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), which consecutively emitted sounds forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the differentiations of frequencies between two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make a two-alternative forced choice (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.
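
    In an auditory Ternus task, the 2AFC data are commonly summarized as the proportion of 'group motion' reports at each inter-frame interval, with a psychometric function fitted to estimate the transition threshold. A minimal sketch of such a fit on hypothetical proportions (not the study's data):

```python
# Hedged sketch: fit a logistic psychometric function to the proportion of
# "group motion" responses across inter-frame intervals (IFIs) and estimate the
# 50% transition threshold. Data points are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

ifi_ms = np.array([50, 80, 110, 140, 170, 200, 230])                 # 7 IFIs (hypothetical)
p_group = np.array([0.08, 0.15, 0.32, 0.55, 0.74, 0.88, 0.95])       # hypothetical proportions

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, ifi_ms, p_group, p0=[140, 0.05])
print(f"transition threshold (50% group motion) ~ {x0:.0f} ms IFI")
```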

  16. The effects of speech output technology in the learning of graphic symbols.

    PubMed Central

    Schlosser, R W; Belfiore, P J; Nigam, R; Blischak, D; Hetzroni, O

    1995-01-01

    The effects of auditory stimuli in the form of synthetic speech output on the learning of graphic symbols were evaluated. Three adults with severe to profound mental retardation and communication impairments were taught to point to lexigrams when presented with words under two conditions. In the first condition, participants used a voice output communication aid to receive synthetic speech as antecedent and consequent stimuli. In the second condition, with a nonelectronic communication board, participants did not receive synthetic speech. A parallel treatments design was used to evaluate the effects of the synthetic speech output as an added component of the augmentative and alternative communication system. The 3 participants reached criterion when provided with the auditory stimuli. Although 2 participants also reached criterion when not provided with the auditory stimuli, the addition of auditory stimuli resulted in more efficient learning and a decreased error rate. Maintenance results, however, indicated no differences between conditions. Findings suggest that auditory stimuli in the form of synthetic speech contribute to the efficient acquisition of graphic communication symbols. PMID:14743828

  17. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    PubMed

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes individuals with language-learning disabilities. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed to or deviated from the grammar. In Study 2, a second sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to the visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  18. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    PubMed

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words while she made a moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrasts. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  19. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  20. Perceptual Learning and Auditory Training in Cochlear Implant Recipients

    PubMed Central

    Fu, Qian-Jie; Galvin, John J.

    2007-01-01

    Learning electrically stimulated speech patterns can be a new and difficult experience for cochlear implant (CI) recipients. Recent studies have shown that most implant recipients at least partially adapt to these new patterns via passive, daily-listening experiences. Gradually introducing a speech processor parameter (eg, the degree of spectral mismatch) may provide for more complete and less stressful adaptation. Although the implant device restores hearing sensation and the continued use of the implant provides some degree of adaptation, active auditory rehabilitation may be necessary to maximize the benefit of implantation for CI recipients. Currently, there are scant resources for auditory rehabilitation for adult, postlingually deafened CI recipients. We recently developed a computer-assisted speech-training program to provide the means to conduct auditory rehabilitation at home. The training software targets important acoustic contrasts among speech stimuli, provides auditory and visual feedback, and incorporates progressive training techniques, thereby maintaining recipients’ interest during the auditory training exercises. Our recent studies demonstrate the effectiveness of targeted auditory training in improving CI recipients’ speech and music perception. Provided with an inexpensive and effective auditory training program, CI recipients may find the motivation and momentum to get the most from the implant device. PMID:17709574

  1. Sensorimotor learning in children and adults: Exposure to frequency-altered auditory feedback during speech production.

    PubMed

    Scheerer, N E; Jacobson, D S; Jones, J A

    2016-02-09

    Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Good Holders, Bad Shufflers: An Examination of Working Memory Processes and Modalities in Children with and without Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Simone, Ashley N; Bédard, Anne-Claude V; Marks, David J; Halperin, Jeffrey M

    2016-01-01

    The aim of this study was to examine working memory (WM) modalities (visual-spatial and auditory-verbal) and processes (maintenance and manipulation) in children with and without attention-deficit/hyperactivity disorder (ADHD). The sample consisted of 63 8-year-old children with ADHD and an age- and sex-matched non-ADHD comparison group (N=51). Auditory-verbal and visual-spatial WM were assessed using the Digit Span and Spatial Span subtests from the Wechsler Intelligence Scale for Children Integrated - Fourth Edition. WM maintenance and manipulation were assessed via forward and backward span indices, respectively. Data were analyzed using a 3-way Group (ADHD vs. non-ADHD)×Modality (Auditory-Verbal vs. Visual-Spatial)×Condition (Forward vs. Backward) Analysis of Variance (ANOVA). Secondary analyses examined differences between Combined and Predominantly Inattentive ADHD presentations. Significant Group×Condition (p=.02) and Group×Modality (p=.03) interactions indicated differentially poorer performance by those with ADHD on backward relative to forward and visual-spatial relative to auditory-verbal tasks, respectively. The 3-way interaction was not significant. Analyses targeting ADHD presentations yielded a significant Group×Condition interaction (p=.009) such that children with ADHD-Predominantly Inattentive Presentation performed differentially poorer on backward relative to forward tasks compared to the children with ADHD-Combined Presentation. Findings indicate a specific pattern of WM weaknesses (i.e., WM manipulation and visual-spatial tasks) for children with ADHD. Furthermore, differential patterns of WM performance were found for children with ADHD-Predominantly Inattentive versus Combined Presentations. (JINS, 2016, 22, 1-11).

  3. Orienting Auditory Spatial Attention Engages Frontal Eye Fields and Medial Occipital Cortex in Congenitally Blind Humans

    PubMed Central

    Garg, Arun; Schwartz, Daniel; Stevens, Alexander A.

    2007-01-01

    What happens in vision-related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas, including the frontal eye fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB individuals, their FEF neurons should be recruited for alternative functions if their attentional role in sighted individuals is due only to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task, using endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation in all CB subjects, similar to but stronger than that in SC, suggesting that FEF plays a role in endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for processing of spatialized auditory targets. PMID:17397882

  4. Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis.

    PubMed

    Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A

    2012-08-01

    Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

  5. Influence of syllable structure on L2 auditory word learning.

    PubMed

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  6. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared with the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of a stimulus. Having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.

  8. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Tata, Matthew S.

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of a stimulus. Having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518
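
    The two preceding records describe the pipeline only at a high level (brightness-change events from an event camera are turned into spatialized sounds). As a rough illustration of that idea only, and not the authors' implementation, the sketch below maps a hypothetical event's horizontal pixel coordinate on a 240-pixel-wide sensor (the DAVIS 240B width) to a stereo pan, so that events on the left of the frame sound louder in the left channel; the function name, click rendering, and panning rule are assumptions.

      import numpy as np

      SENSOR_WIDTH = 240          # DAVIS 240B horizontal resolution (pixels)
      FS = 44100                  # audio sample rate (Hz)

      def event_to_stereo_click(x_pixel, dur_ms=10):
          """Render one brightness-change event as a short noise click whose
          left/right balance follows the event's horizontal position."""
          pan = x_pixel / (SENSOR_WIDTH - 1)           # 0 = far left, 1 = far right
          n = int(FS * dur_ms / 1000)
          click = np.random.default_rng(0).standard_normal(n) * 0.1
          # Constant-power panning keeps overall loudness roughly constant.
          left = click * np.cos(pan * np.pi / 2)
          right = click * np.sin(pan * np.pi / 2)
          return np.stack([left, right], axis=1)       # shape (samples, 2)

      # Example: an object appearing near the left edge of the visual field.
      stereo = event_to_stereo_click(x_pixel=20)
      print(stereo.shape)   # (441, 2), ready to write into a stereo audio buffer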

  9. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions.

    PubMed

    Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie

    2018-01-01

    Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After familiarisation with the software, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli were most beneficial for patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.

  10. Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    PubMed Central

    Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.

    2007-01-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca’s area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas, including the premotor cortex and putamen, also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then left PT activity decreased from Session 2 to Session 3. Comparison of L2 laterality (relative to L1 laterality) in the auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus the auditory L2. These data establish a common role for Broca’s area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the “critical period.” The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186

  11. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  12. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    PubMed

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  13. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    PubMed

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activity. Neurofeedback (NFB) using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups showed signs consistent with EEG maturation, but only the Auditory group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved cognitive abilities more than the visual reinforcer.

  14. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling.

    PubMed

    Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J

    2014-01-01

    Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration. Direct phonetic-semantic association is simulated in order to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM has a good ability to learn the auditory and semantic categories presented within the training data; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to detailed categorization and clustering, while keeping the clusters that have already been learned and the network structure that has already developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory-semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically-inspired neurocomputational model.
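
    The abstract names the key ingredients (two growing maps, reinforcing-by-link, link-forgetting) without giving the update rules. The sketch below is a deliberately simplified stand-in, not the published I-GSOM: two fixed-size one-dimensional self-organizing maps are coupled by a Hebbian link matrix, with a reinforcement increment for co-activated winner nodes and a multiplicative forgetting factor. All map sizes, learning rates, and the toy data are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def bmu(weights, x):
          """Index of the best-matching unit (closest weight vector)."""
          return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

      def som_update(weights, x, lr=0.1, sigma=1.5):
          """Move the winner and its neighbours (1-D map) toward the input."""
          w = bmu(weights, x)
          idx = np.arange(len(weights))
          h = np.exp(-((idx - w) ** 2) / (2 * sigma ** 2))   # neighbourhood kernel
          weights += lr * h[:, None] * (x - weights)
          return w

      # Toy paired data: 4-D "acoustic" vectors and 3-D "semantic" vectors.
      pairs = [(rng.random(4), rng.random(3)) for _ in range(200)]
      aud_map, sem_map = rng.random((10, 4)), rng.random((10, 3))
      links = np.zeros((10, 10))          # auditory-node x semantic-node associations
      reinforce, forget = 0.2, 0.01

      for epoch in range(20):
          for a_vec, s_vec in pairs:
              a_win = som_update(aud_map, a_vec)
              s_win = som_update(sem_map, s_vec)
              links *= (1.0 - forget)          # link-forgetting step
              links[a_win, s_win] += reinforce  # reinforcing-by-link step

      # Decoding: given a sound, follow the strongest link to a semantic node.
      a_test = pairs[0][0]
      print("semantic node for test sound:", int(np.argmax(links[bmu(aud_map, a_test)])))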

  15. Adaptation to stimulus statistics in the perception and neural representation of auditory space.

    PubMed

    Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J

    2010-06-24

    Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
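
    For readers who want to reproduce the flavour of this stimulus protocol, the sketch below generates a stereo noise sequence whose per-burst interaural level difference (ILD) is drawn from a Gaussian with a chosen mean and standard deviation. The burst duration, level scaling, and default values are illustrative assumptions, not the parameters used in the study.

      import numpy as np

      def ild_sequence(n_bursts=100, mean_ild_db=4.0, sd_ild_db=3.0,
                       burst_ms=50, fs=44100, seed=0):
          """Stereo noise sequence whose per-burst ILD is Gaussian-distributed;
          positive ILD values favour the right ear."""
          rng = np.random.default_rng(seed)
          n = int(fs * burst_ms / 1000)
          left, right = [], []
          for ild in rng.normal(mean_ild_db, sd_ild_db, n_bursts):
              burst = rng.standard_normal(n) * 0.1
              g = 10 ** (ild / 40)          # split the level difference across ears
              left.append(burst / g)
              right.append(burst * g)
          return np.concatenate(left), np.concatenate(right)

      # Example: a distribution biased 4 dB toward the right, SD of 3 dB.
      L, R = ild_sequence(mean_ild_db=4.0, sd_ild_db=3.0)
      print(L.shape, R.shape)   # one long left channel and one long right channel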

  16. An analysis of mathematical connection ability based on student learning style on visualization auditory kinesthetic (VAK) learning model with self-assessment

    NASA Astrophysics Data System (ADS)

    Apipah, S.; Kartono; Isnarto

    2018-03-01

    This research aims to analyze the quality of VAK learning with self-assessment with respect to students' mathematical connection ability, and to analyze students' mathematical connection ability by learning style within the VAK learning model with self-assessment. The research applies a mixed-methods approach with a concurrent embedded design. The subjects were Grade VIII students from State Junior High School 9 Semarang with visual, auditory, and kinesthetic learning styles. Learning-style data were collected with questionnaires, mathematical connection ability data with tests, and self-assessment data with assessment sheets. The quality of learning was assessed qualitatively across the planning, implementation, and evaluation stages. The mathematical connection ability test results were analyzed quantitatively with a mean test, a completeness (mastery) test, a test of mean differences, and a test of differences in proportions. The results show that the VAK learning model produces well-qualified learning from both the qualitative and quantitative perspectives. Students with a visual learning style showed the highest mathematical connection ability, students with a kinesthetic learning style showed average mathematical connection ability, and students with an auditory learning style showed the lowest mathematical connection ability.

  17. Measures of Working Memory, Sequence Learning, and Speech Recognition in the Elderly.

    ERIC Educational Resources Information Center

    Humes, Larry E.; Floyd, Shari S.

    2005-01-01

    This study describes the measurement of 2 cognitive functions, working-memory capacity and sequence learning, in 2 groups of listeners: young adults with normal hearing and elderly adults with impaired hearing. The measurement of these 2 cognitive abilities with a unique, nonverbal technique capable of auditory, visual, and auditory-visual…

  18. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  19. Word learning in deaf children with cochlear implants: effects of early auditory experience.

    PubMed

    Houston, Derek M; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T

    2012-05-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. © 2012 Blackwell Publishing Ltd.

  20. Word learning in deaf children with cochlear implants: effects of early auditory experience

    PubMed Central

    Houston, Derek M.; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T.

    2013-01-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. PMID:22490184

  1. Primary Auditory Cortex Regulates Threat Memory Specificity

    ERIC Educational Resources Information Center

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  2. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  3. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  4. Auditory middle latency response in children with learning difficulties.

    PubMed

    Frizzo, Ana Claudia Figueiredo; Issac, Myriam Lima; Pontes-Fernandes, Angela Cristina; Menezes, Pedro de Lemos; Funayama, Carolina Araújo Rodrigues

    2012-07-01

    This was an objective laboratory assessment of the central auditory systems of children with learning disabilities, aiming to examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8-13 years of both genders, with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8-32.3 ms, Pa = 19.0-51.4 ms, Nb = 30.0-64.3 ms (learning disorders group) and Na = 13.2-29.6 ms, Pa = 21.8-42.8 ms, Nb = 28.4-65.8 ms (healthy group). The Na-Pa amplitude ranged from 0.3 to 6.8 μV (learning disorders group) and from 0.2 to 3.6 μV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study provides information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders.

  5. Neural network retuning and neural predictors of learning success associated with cello training.

    PubMed

    Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J

    2018-06-26

    The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.

  6. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning.

    PubMed

    de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek

    2018-05-01

    The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.

  7. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    PubMed

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.

  8. Spatial selective attention in a complex auditory environment such as polyphonic music.

    PubMed

    Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf

    2010-01-01

    To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.

  9. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    PubMed

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention-cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning the association of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance for the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that a higher rate of memory decay is the source of making more mistakes and reporting lower confidence for non-congruent pairs in the passive learning mode.
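
    The abstract describes the computational model only as a reinforcement-learning module paired with a memory-decay module. The sketch below is a much-simplified illustration of how such a pair of mechanisms can interact, not the authors' fitted model: association strengths are updated with a delta rule and multiplicatively decayed on every trial, so a faster decay rate degrades late-learning accuracy. The parameter values, decision threshold, and toy pair set are assumptions.

      import numpy as np

      def simulate(pairs, congruent, n_trials=300, alpha=0.3, decay=0.05):
          """pairs: (visual_id, auditory_id) tuples shown in a fixed cycle;
          congruent: the subset of pairs that are truly associated."""
          strength = {}                          # learned association strength per pair
          correct = []
          for t in range(n_trials):
              v, a = pairs[t % len(pairs)]
              q = strength.get((v, a), 0.0)
              # Decision: report "associated" when stored strength is high enough.
              said_associated = q > 0.5
              is_associated = (v, a) in congruent
              correct.append(said_associated == is_associated)
              # Delta-rule (reinforcement) update toward the feedback signal.
              target = 1.0 if is_associated else 0.0
              strength[(v, a)] = q + alpha * (target - q)
              # Memory decay applied to every stored association on every trial.
              for k in strength:
                  strength[k] *= (1.0 - decay)
          return float(np.mean(correct[-50:]))   # accuracy late in learning

      pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
      truly_associated = {(0, 0), (1, 1)}
      print("late accuracy, slow decay:", simulate(pairs, truly_associated, decay=0.02))
      print("late accuracy, fast decay:", simulate(pairs, truly_associated, decay=0.30))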

  10. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children

    PubMed Central

    Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica

    2017-01-01

    The specific role of early visual deprivation on spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and the difficulty of recruiting young children who have been blind from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization were assessed in a group of seven 3–5 year old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim of understanding whether visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed more poorly overall than sighted and low vision children in all the spatial tasks performed; secondly, low vision children performed as well as or better than sighted children in the same auditory spatial tasks; thirdly, higher residual levels of visual acuity were positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that the more residual vision, the better the spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and of early intervention to prevent the risk of isolation and social exclusion. PMID:28443040

  11. Spatial Disorientation Training - Demonstration and Avoidance (entrainement a la desorientation spatiale - Demonstration et reponse)

    DTIC Science & Technology

    2008-10-01

    1.2.1.2 Acoustic HUD: A three-dimensional auditory display presents sound from arbitrary directions spanning a...through the visual channel, use of the auditory channel shortens reaction times and is expected to reduce pilot workload, thus improving the overall...of vestibular stimulation using a rotating chair or suitable disorientation device to provide each student with a personal experience of some of

  12. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that, despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
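
    The "optimal combination of auditory and visual information according to their respective spatial reliability" referred to here is the standard maximum-likelihood cue-combination rule, in which each modality's estimate is weighted by its inverse variance. The sketch below shows that rule with made-up noise values chosen only to illustrate why visual capture should shrink in the periphery; it is not the study's data or fitted model.

      import numpy as np

      def fuse(visual_deg, auditory_deg, sigma_v, sigma_a):
          """Reliability-weighted (maximum-likelihood) audio-visual fusion."""
          w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)   # visual weight
          fused = w_v * visual_deg + (1 - w_v) * auditory_deg
          sigma_fused = np.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_a**2))
          return fused, sigma_fused, w_v

      # Central vision is precise, so vision dominates (strong ventriloquism)...
      print(fuse(visual_deg=0.0, auditory_deg=10.0, sigma_v=1.0, sigma_a=6.0))
      # ...whereas in the periphery visual reliability drops and the visual bias shrinks.
      print(fuse(visual_deg=60.0, auditory_deg=70.0, sigma_v=8.0, sigma_a=7.0))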

  13. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Learning Auditory Discrimination with Computer-Assisted Instruction: A Comparison of Two Different Performance Objectives.

    ERIC Educational Resources Information Center

    Steinhaus, Kurt A.

    A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…

  15. The Effects of Sound-Field Amplification on Children with Hearing Impairment and Other Diagnoses in Preschool and Primary Classes

    ERIC Educational Resources Information Center

    Furno, Lois Ehrler

    2012-01-01

    Effective learning occurs in auditory environments. Background noise is inherent to classrooms; the recommendation that it be 15 decibels softer than instruction is rarely achieved. Learning is diminished by interference with the auditory reception of information, especially for students who are hard of hearing or have other diagnoses. Sound-field…

  16. The Development of Visual and Auditory Selective Attention Using the Central-Incidental Paradigm.

    ERIC Educational Resources Information Center

    Conroy, Robert L.; Weener, Paul

    Analogous auditory and visual central-incidental learning tasks were administered to 24 students from each of the second, fourth, and sixth grades. The visual tasks served as another modification of Hagen's central-incidental learning paradigm, with the interpretation that focal attention processes continue to develop until the age of 12 or 13…

  17. Auditory Training for Experienced and Inexperienced Second-Language Learners: Native French Speakers Learning English Vowels

    ERIC Educational Resources Information Center

    Iverson, Paul; Pinet, Melanie; Evans, Bronwen G.

    2012-01-01

    This study examined whether high-variability auditory training on natural speech can benefit experienced second-language English speakers who already are exposed to natural variability in their daily use of English. The subjects were native French speakers who had learned English in school; experienced listeners were tested in England and the less…

  18. Skills for Academic Improvement: A Guide for How-to-Study Counselors.

    DTIC Science & Technology

    1982-06-01

    auditory neurological system beyond the car. Auditory perception consists of essentially eight components, which are: 1. Auditory attention ...34daydreaming" or difficulty following lectures in different classes may be an indication of problems with auditory attention . 2. Sound localization...says. The counselor must listen not only attentively to what the cadet says, but must learn to listen perceptively for what the cadet really means. The

  19. Single-trial classification of auditory event-related potentials elicited by stimuli from different spatial directions.

    PubMed

    Cabrera, Alvaro Fuentes; Hoffmann, Pablo Faundez

    2010-01-01

    This study focused on the single-trial classification of auditory event-related potentials elicited by sound stimuli from different spatial directions. Five naïve subjects were asked to localize a sound stimulus reproduced over one of 8 loudspeakers placed in a circular array and equally spaced by 45°. The subject was seated in the center of the circular array. Due to the complexity of an eight-class classification, our approach consisted of feeding the classifier two classes, or spatial directions, at a time. The seven chosen pairs each combined 0°, the loudspeaker directly in front of the subject, with one of the other seven directions. The discrete wavelet transform was used to extract features in the time-frequency domain and a support vector machine performed the classification. The average accuracy over all subjects and all pairs of spatial directions was 76.5%, σ = 3.6. The results of this study provide evidence that the direction of a sound is encoded in single-trial auditory event-related potentials.
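
    As a rough, self-contained illustration of the pipeline described above (discrete wavelet features classified pairwise with a support vector machine), the Python sketch below uses PyWavelets and scikit-learn on simulated single-trial epochs; the wavelet, decomposition level, epoch length, and random data are assumptions for demonstration, not the authors' parameters.

    import numpy as np
    import pywt
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def dwt_features(epoch, wavelet="db4", level=4):
        # Flatten all approximation/detail coefficients into one feature vector.
        return np.concatenate(pywt.wavedec(epoch, wavelet, level=level))

    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((100, 512))   # stand-in for single-trial ERPs from two directions
    y = np.repeat([0, 1], 50)                 # e.g., 0 deg vs. 45 deg

    X = np.array([dwt_features(tr) for tr in X_raw])
    clf = SVC(kernel="rbf", C=1.0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # near chance here, since the simulated data are noise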

  20. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  1. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R 1 -estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
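
    The spatial correspondence between myelination and tonotopy reported above reduces, at its simplest, to correlating two cortical maps across vertices within a region. A minimal Python sketch with simulated maps (the arrays and ROI mask are placeholders, not the study's data):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    tonotopic_strength = rng.random(10000)                               # per-vertex strength of frequency preference
    r1_myelin = 0.5 * tonotopic_strength + rng.normal(0.0, 0.2, 10000)   # simulated R1-based myelin estimate
    roi = rng.random(10000) > 0.5                                        # vertices of one auditory region

    r, p = pearsonr(tonotopic_strength[roi], r1_myelin[roi])
    print(f"within-ROI map correlation: r = {r:.2f}, p = {p:.1e}")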

  2. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    PubMed Central

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  3. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  4. Auditory Discrimination Learning: Role of Working Memory.

    PubMed

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.

  5. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study.

    PubMed

    Chang, Ming; Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between 'l' and 'r' sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words 'light' and 'right'. There was also improvement in the recognition of other words containing 'l' and 'r' (e.g., 'blight' and 'bright'), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses.

  6. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study

    PubMed Central

    Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between ‘l’ and ‘r’ sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words ‘light’ and ‘right’. There was also improvement in the recognition of other words containing ‘l’ and ‘r’ (e.g., ‘blight’ and ‘bright’), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses. PMID:28617861

  7. Auditory Discrimination Learning: Role of Working Memory

    PubMed Central

    Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  8. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    PubMed Central

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and non-lemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation of the lemniscal system, comparable to repetitive tonal stimulation, evokes three major types of changes in the physiological properties of cortical and subcortical auditory neurons, such as their tuning to specific values of acoustic parameters, through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulation of the auditory system and on its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273

  9. Engineering Data Compendium. Human Perception and Performance. Volume 2

    DTIC Science & Technology

    1988-01-01

    Stimulation 5.1014 5.1004 Auditory Detection in the Presence of Visual Stimulation 5.1015 5.1005 Tactual Detection and Discrimination in the Presence of...Accessory Stimulation 5.1016 5.1006 Tactile Versus Auditory Localization of Sound 5.1007 Spatial Localization in the Presence of Inter- 5.1017...York: Wiley. Cross References 5.1004 Auditory detection in the presence of visual stimulation ; 5.1005 Tactual detection and dis- crimination in

  10. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
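
    The headphone technique described above amounts to convolving a source signal with binaural room impulse responses (BRIRs) measured at the listener's ear canals in the recording room. A minimal Python/SciPy sketch, assuming such BRIRs are available as arrays:

    import numpy as np
    from scipy.signal import fftconvolve

    def binaural_render(mono, brir_left, brir_right):
        # Recreate the left/right eardrum signals of the recording room
        # so that headphone playback yields an externalized virtual source.
        left = fftconvolve(mono, brir_left)
        right = fftconvolve(mono, brir_right)
        return np.stack([left, right], axis=-1)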

  11. Imaging auditory representations of song and syllables in populations of sensorimotor neurons essential to vocal communication.

    PubMed

    Peh, Wendy Y X; Roberts, Todd F; Mooney, Richard

    2015-04-08

    Vocal communication depends on the coordinated activity of sensorimotor neurons important to vocal perception and production. How vocalizations are represented by spatiotemporal activity patterns in these neuronal populations remains poorly understood. Here we combined intracellular recordings and two-photon calcium imaging in anesthetized adult zebra finches (Taeniopygia guttata) to examine how learned birdsong and its component syllables are represented in identified projection neurons (PNs) within HVC, a sensorimotor region important for song perception and production. These experiments show that neighboring HVC PNs can respond at markedly different times to song playback and that different syllables activate spatially intermingled PNs within a local (~100 μm) region of HVC. Moreover, noise correlations were stronger between PNs that responded most strongly to the same syllable and were spatially graded within and between classes of PNs. These findings support a model in which syllabic and temporal features of song are represented by spatially intermingled PNs functionally organized into cell- and syllable-type networks within local spatial scales in HVC. Copyright © 2015 the authors 0270-6474/15/355589-17$15.00/0.

  12. Learning style-based teaching harvests a superior comprehension of respiratory physiology.

    PubMed

    Anbarasi, M; Rajkumar, G; Krishnakumar, S; Rajendran, P; Venkatesan, R; Dinesh, T; Mohan, J; Venkidusamy, S

    2015-09-01

    Students entering medical college generally show vast diversity in their school education. It becomes the responsibility of teachers to motivate students and meet the needs of all learners. One such measure is teaching students in their own preferred learning style. The present study aimed to incorporate a learning style-based teaching-learning program for medical students and to reveal its significance and utility. Learning styles of students were assessed online using the visual-auditory-kinesthetic (VAK) learning style self-assessment questionnaire. When respiratory physiology was taught, students were divided into three groups, namely, visual (n = 34), auditory (n = 44), and kinesthetic (n = 28), based on their learning style. A fourth group (the traditional group; n = 40) was formed by choosing students randomly from the above three groups. The visual, auditory, and kinesthetic groups were taught following the appropriate teaching-learning strategies. The traditional group was taught via the routine didactic lecture method. The effectiveness of this intervention was evaluated by a pretest and two posttests, posttest 1 immediately after the intervention and posttest 2 after a month. In posttest 1, one-way ANOVA showed a statistically significant difference (P = 0.005). Post hoc analysis showed a significant difference between the kinesthetic and traditional groups (P = 0.002). One-way ANOVA also showed a significant difference in posttest 2 scores (P < 0.0001). Post hoc analysis showed significant differences between each of the three learning style-based groups and the traditional group [visual vs. traditional (P = 0.002), auditory vs. traditional (P = 0.03), and kinesthetic vs. traditional (P = 0.001)]. This study emphasizes that teaching methods tailored to students' style of learning definitely improve their understanding, performance, and retrieval of the subject. Copyright © 2015 The American Physiological Society.
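
    For orientation, the group comparison reported above is a one-way ANOVA followed by post hoc pairwise tests. The Python/SciPy sketch below uses simulated scores, and Bonferroni-corrected t-tests stand in for whatever post hoc procedure the study actually applied:

    import itertools
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    scores = {                                   # hypothetical post-test scores per teaching group
        "visual": rng.normal(14, 2, 34),
        "auditory": rng.normal(14, 2, 44),
        "kinesthetic": rng.normal(15, 2, 28),
        "traditional": rng.normal(12, 2, 40),
    }

    F, p = stats.f_oneway(*scores.values())
    print(f"one-way ANOVA: F = {F:.2f}, P = {p:.4f}")

    pairs = list(itertools.combinations(scores, 2))
    for a, b in pairs:
        _, p_pair = stats.ttest_ind(scores[a], scores[b])
        print(f"{a} vs. {b}: Bonferroni-corrected P = {min(1.0, p_pair * len(pairs)):.4f}")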

  13. Learning, neural plasticity and sensitive periods: implications for language acquisition, music training and transfer across the lifespan

    PubMed Central

    White, Erin J.; Hutka, Stefanie A.; Williams, Lynne J.; Moreno, Sylvain

    2013-01-01

    Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain’s ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning for learning during and after sensitive periods and outline avenues of future research. PMID:24312022

  14. The Diagnosis and Management of Auditory Processing Disorder

    ERIC Educational Resources Information Center

    Moore, David R.

    2011-01-01

    Purpose: To provide a personal perspective on auditory processing disorder (APD), with reference to the recent clinical forum on APD and the needs of clinical speech-language pathologists and audiologists. Method: The Medical Research Council-Institute of Hearing Research (MRC-IHR) has been engaged in research into APD and auditory learning for 8…

  15. Auditory rhythmic cueing in movement rehabilitation: findings and possible mechanisms

    PubMed Central

    Schaefer, Rebecca S.

    2014-01-01

    Moving to music is intuitive and spontaneous, and music is widely used to support movement, most commonly during exercise. Auditory cues are increasingly also used in the rehabilitation of disordered movement, by aligning actions to sounds such as a metronome or music. Here, the effect of rhythmic auditory cueing on movement is discussed and representative findings of cued movement rehabilitation are considered for several movement disorders, specifically post-stroke motor impairment, Parkinson's disease and Huntington's disease. There are multiple explanations for the efficacy of cued movement practice. Potentially relevant, non-mutually exclusive mechanisms include the acceleration of learning; qualitatively different motor learning owing to an auditory context; effects of increased temporal skills through rhythmic practices and motivational aspects of musical rhythm. Further considerations of rehabilitation paradigm efficacy focus on specific movement disorders, intervention methods and complexity of the auditory cues. Although clinical interventions using rhythmic auditory cueing do not show consistently positive results, it is argued that internal mechanisms of temporal prediction and tracking are crucial, and further research may inform rehabilitation practice to increase intervention efficacy. PMID:25385780

  16. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    PubMed

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training. Copyright © 2017 the authors 0270-6474/17/375948-12$15.00/0.

  17. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some ability to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests (i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena, and (ii) a different role for the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage. The present study may contribute to advancing understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
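
    The sketch below is a deliberately crude single-unit caricature, not the authors' multi-area network: a toy collicular unit whose visual drive is gated by the fraction of surviving V1, illustrating how a coincident auditory input can lift a weak visual signal toward threshold only when some V1 tissue remains. All gains and thresholds are invented for the example.

    import numpy as np

    def sigmoid(x, theta=6.0, slope=0.8):
        return 1.0 / (1.0 + np.exp(-(x - theta) / slope))

    def sc_unit(vis_drive, aud_drive, v1_gain):
        # v1_gain in [0, 1]: how much of the visual input survives the cortical lesion
        return sigmoid(v1_gain * vis_drive + aud_drive)

    for v1_gain in (1.0, 0.3, 0.0):              # intact V1, spared islands, complete lesion
        v_alone = sc_unit(5.0, 0.0, v1_gain)
        av = sc_unit(5.0, 3.0, v1_gain)
        print(f"V1 gain {v1_gain:.1f}: V alone {v_alone:.2f}, AV {av:.2f}")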

  18. EXEL; Experience for Children in Learning. Parent-Directed Activities to Develop: Oral Expression, Visual Discrimination, Auditory Discrimination, Motor Coordination.

    ERIC Educational Resources Information Center

    Behrmann, Polly; Millman, Joan

    The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…

  19. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    ERIC Educational Resources Information Center

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning and memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…

  20. A Study of the Role of Central Auditory Processing in Learning Disabilities: A Prospectus Submitted to the Department of Speech.

    ERIC Educational Resources Information Center

    Murray, Hugh

    Proposed is a study to evaluate the auditory systems of learning disabled (LD) students with a new audiological, diagnostic, stimulus apparatus which is capable of objectively measuring the interaction of the binaural aspects of hearing. The author points out problems with LD definitions that exclude neurological disorders. The detection of…

  1. The Effect of Auditory Integration Training on the Working Memory of Adults with Different Learning Preferences

    ERIC Educational Resources Information Center

    Ryan, Tamara E.

    2014-01-01

    The purpose of this study was to determine the effects of auditory integration training (AIT) on a component of the executive function of working memory; specifically, to determine if learning preferences might have an interaction with AIT to increase the outcome for some learners. The question asked by this quantitative pretest posttest design is…

  2. Intra-amygdala microinfusion of IL-6 impairs the auditory fear conditioning of rats via JAK/STAT activation.

    PubMed

    Hao, Yongxin; Jing, He; Bi, Qiang; Zhang, Jiaozhen; Qin, Ling; Yang, Pingting

    2014-12-15

    Though accumulating literature implicates cytokines in the pathophysiology of mental disorders, the role of interleukin-6 (IL-6) in learning and memory functions remains unresolved. The present study was undertaken to investigate the effect of IL-6 on amygdala-dependent fear learning. Adult Wistar rats were studied using the auditory fear conditioning test and pharmacological techniques. The data showed that infusions of IL-6 aimed at the amygdala dose-dependently impaired the acquisition and extinction of conditioned fear. In addition, Western blot analysis confirmed that JAK/STAT was activated (phosphorylated) in a time-dependent manner by the IL-6 treatment. Moreover, rats treated with JSI-124, a JAK/STAT3 inhibitor, prior to the IL-6 treatment showed a significant decrease in the IL-6-induced impairments of fear conditioning. Taken together, our results demonstrate that the learning behavior of rats in auditory fear conditioning can be modulated by IL-6 via the amygdala. Furthermore, JAK/STAT3 activation in the amygdala appears to play a role in the IL-6-mediated behavioral alterations of rats in auditory fear learning. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types

    PubMed Central

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with the auditory than with the visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants learned a 90°-out-of-phase pattern for three consecutive days either with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. When the feedback was removed after practice, performance changes differed between the Lissajous group and the two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition, where an irregular rhythm counting task was applied as a secondary task, also suggested that lower involvement of conscious control may result in better bimanual coordination performance. PMID:26895286

  4. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types.

    PubMed

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with the auditory than with the visual channel. However, it is unclear whether this differential "guidance effect" between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants learned a 90°-out-of-phase pattern for three consecutive days either with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. When the feedback was removed after practice, performance changes differed between the Lissajous group and the two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition, where an irregular rhythm counting task was applied as a secondary task, also suggested that lower involvement of conscious control may result in better bimanual coordination performance.

  5. Auditory middle latency response in children with learning difficulties

    PubMed Central

    Frizzo, Ana Claudia Figueiredo; Issac, Myriam Lima; Pontes-Fernandes, Angela Cristina; Menezes, Pedro de Lemos; Funayama, Carolina Araújo Rodrigues

    2012-01-01

    Introduction: This is an objective laboratory assessment of the central auditory systems of children with learning disabilities. Aim: To examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. Methods: This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8–13 years of both genders with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. Results and Conclusions: The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8–32.3 ms, Pa = 19.0–51.4 ms, Nb = 30.0–64.3 ms (learning disorders group) and Na = 13.2–29.6 ms, Pa = 21.8–42.8 ms, Nb = 28.4–65.8 ms (healthy group). The Na-Pa amplitude ranged from 0.3 to 6.8 μV (learning disorders group) or from 0.2 to 3.6 μV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study has provided information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders. PMID:25991954

  6. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    PubMed Central

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds that are considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to the transmission of information about self-generated calls in males and to the storage of information about the bird's auditory experience in females. PMID:26578918

  7. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    PubMed

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds that are considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to the transmission of information about self-generated calls in males and to the storage of information about the bird's auditory experience in females.

  8. Types of Learning Problems

    MedlinePlus

    ... Dyscalculia is defined as difficulty performing mathematical calculations. Math is problematic for many students, but dyscalculia may prevent a teenager from grasping even basic math concepts. Auditory Memory and Processing Disabilities Auditory memory ...

  9. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    PubMed

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age.  Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  10. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  11. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
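
    Imaginary coherence, used above to rule out volume conduction, keeps only the phase-lagged part of the coupling, because zero-lag (volume-conducted) mixing contributes a purely real coherency. A minimal Python/SciPy sketch; the signals and parameters are placeholders:

    import numpy as np
    from scipy.signal import csd, welch

    def imaginary_coherence(x, y, fs, nperseg=1024):
        # Imaginary part of the complex coherency between two field-potential signals.
        f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
        _, sxx = welch(x, fs=fs, nperseg=nperseg)
        _, syy = welch(y, fs=fs, nperseg=nperseg)
        return f, np.imag(sxy / np.sqrt(sxx * syy))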

  12. Implicit sequence learning in deaf children with cochlear implants.

    PubMed

    Conway, Christopher M; Pisoni, David B; Anaya, Esperanza M; Karpicke, Jennifer; Henning, Shirley C

    2011-01-01

    Deaf children with cochlear implants (CIs) represent an intriguing opportunity to study neurocognitive plasticity and reorganization when sound is introduced following a period of auditory deprivation early in development. Although it is common to consider deafness as affecting hearing alone, it may be the case that auditory deprivation leads to more global changes in neurocognitive function. In this paper, we investigate implicit sequence learning abilities in deaf children with CIs using a novel task that measured learning through improvement to immediate serial recall for statistically consistent visual sequences. The results demonstrated two key findings. First, the deaf children with CIs showed disturbances in their visual sequence learning abilities relative to the typically developing normal-hearing children. Second, sequence learning was significantly correlated with a standardized measure of language outcome in the CI children. These findings suggest that a period of auditory deprivation has secondary effects related to general sequencing deficits, and that disturbances in sequence learning may at least partially explain why some deaf children still struggle with language following cochlear implantation. © 2010 Blackwell Publishing Ltd.

  13. Early experience shapes vocal neural coding and perception in songbirds

    PubMed Central

    Woolley, Sarah M. N.

    2012-01-01

    Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals. PMID:22711657

  14. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of the sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than in the ipsilateral hemifield, and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of the sound, being twice as large for the 3D as for the ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated in both the active and passive conditions. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
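
    As an illustration of the ITD-only stimulus class contrasted with the individualized 3D stimuli, the sketch below generates a broadband noise delivered identically to both ears except for a sub-millisecond interaural delay; the 400-microsecond value and the sampling rate are assumptions for illustration, not the study's parameters.

        import numpy as np

        fs = 48000                                    # audio sampling rate (assumed)
        itd_us = 400                                  # interaural delay in microseconds (assumed)
        delay = int(round(itd_us * 1e-6 * fs))        # delay in samples
        noise = np.random.randn(fs // 2)              # 500 ms of broadband noise

        left = np.concatenate([noise, np.zeros(delay)])
        right = np.concatenate([np.zeros(delay), noise])  # right ear lags: sound lateralized left
        stereo = np.stack([left, right], axis=1)          # two-channel buffer for playback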

  15. Effects of methylphenidate on working memory components: influence of measurement.

    PubMed

    Bedard, Anne-Claude; Jain, Umesh; Johnson, Sheilah Hogg; Tannock, Rosemary

    2007-09-01

    This study investigated the effects of methylphenidate (MPH) on components of working memory (WM) in attention-deficit hyperactivity disorder (ADHD) and determined the responsiveness of WM measures to MPH. Participants were a clinical sample of 50 children and adolescents with ADHD, aged 6 to 16 years, who completed an acute randomized, double-blind, placebo-controlled, crossover trial with single challenges of three MPH doses. Four components of WM were investigated, which varied in processing demands (storage versus manipulation of information) and modality (auditory-verbal; visual-spatial), each of which was indexed by a minimum of two separate measures. MPH improved the ability to store visual-spatial information irrespective of the instrument used, but had no effect on the storage of auditory-verbal information. By contrast, MPH enhanced the ability to manipulate both auditory-verbal and visual-spatial information, although the effects were instrument specific in both cases. MPH effects on WM are selective: they vary as a function of WM component and measurement.

  16. [Fragile X syndrome with Dandy-Walker variant: a clinical study of oral and written communicative manifestations].

    PubMed

    Lamônica, Dionísia Aparecida Cusin; Ferraz, Plínio Marcos Duarte Pinto; Ferreira, Amanda Tragueta; Prado, Lívia Maria do; Abramides, Dagma Venturini Marquez; Gejão, Mariana Germano

    2011-01-01

    Fragile X syndrome is the most frequent cause of inherited intellectual disability. The Dandy-Walker variant is a specific constellation of neuroradiological findings. The present study reports oral and written communication findings in a 15-year-old boy with a clinical and molecular diagnosis of Fragile X syndrome and neuroimaging findings consistent with the Dandy-Walker variant. The speech-language pathology and audiology evaluation was carried out using the Communicative Behavior Observation, the Phonology assessment of the ABFW - Child Language Test, the Phonological Abilities Profile, the Test of School Performance, and the Illinois Test of Psycholinguistic Abilities. Stomatognathic system and hearing assessments were also performed. The evaluation revealed phonological, semantic, pragmatic and morphosyntactic deficits in oral language; deficits in psycholinguistic abilities (auditory reception, verbal expression, combination of sounds, auditory and visual sequential memory, auditory closure, auditory and visual association); and morphological and functional alterations in the stomatognathic system. In reading, the participant had difficulty decoding graphic symbols. In writing, he produced omissions, agglutinations and multiple representations, with a predominant use of vowels, in addition to difficulties in visuo-spatial organization. In mathematics, despite recognizing numerals, he could not perform arithmetic operations. No alterations were observed in the peripheral hearing evaluation. The constellation of behavioral, cognitive, linguistic and perceptual symptoms described for Fragile X syndrome, together with the structural central nervous system alterations observed in the Dandy-Walker variant, markedly interfered with the development of communicative abilities, with learning to read and write, and with the individual's social integration.

  17. Auditory experience controls the maturation of song discrimination and sexual response in Drosophila

    PubMed Central

    Li, Xiaodong; Ishimoto, Hiroshi

    2018-01-01

    In birds and higher mammals, auditory experience during development is critical to discriminate sound patterns in adulthood. However, the neural and molecular nature of this acquired ability remains elusive. In fruit flies, acoustic perception has been thought to be innate. Here we report, surprisingly, that auditory experience of a species-specific courtship song in developing Drosophila shapes adult song perception and resultant sexual behavior. Preferences in the song-response behaviors of both males and females were tuned by social acoustic exposure during development. We examined the molecular and cellular determinants of this social acoustic learning and found that GABA signaling acting on the GABAA receptor Rdl in the pC1 neurons, the integration node for courtship stimuli, regulated auditory tuning and sexual behavior. These findings demonstrate that maturation of auditory perception in flies is unexpectedly plastic and is acquired socially, providing a model to investigate how song learning regulates mating preference in insects. PMID:29555017

  18. What drives the perceptual change resulting from speech motor adaptation? Evaluation of hypotheses in a Bayesian modeling framework

    PubMed Central

    Perrier, Pascal; Schwartz, Jean-Luc; Diard, Julien

    2018-01-01

    Shifts in perceptual boundaries resulting from speech motor learning induced by perturbations of auditory feedback have been taken as evidence for the involvement of motor functions in auditory speech perception. Beyond this general statement, the precise mechanisms underlying this involvement are not yet fully understood. In this paper we propose a quantitative evaluation of some hypotheses concerning the motor and auditory updates that could result from motor learning, in the context of various assumptions about the roles of the auditory and somatosensory pathways in speech perception. This analysis was made possible by the use of a Bayesian model that implements these hypotheses by expressing the relationships between speech production and speech perception in a joint probability distribution. The evaluation focuses on how the hypotheses can (1) predict the location of perceptual boundary shifts once the perturbation has been removed, (2) account for the magnitude of the compensation in the presence of the perturbation, and (3) describe the correlation between these two behavioral characteristics. Experimental findings about changes in speech perception following adaptation to auditory feedback perturbations serve as a reference. Simulations suggest that these findings are compatible with a framework in which motor adaptation updates both the auditory-motor internal model and the auditory characterization of the perturbed phoneme, and in which perception involves both auditory and somatosensory pathways. PMID:29357357
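
    The kind of reasoning evaluated in the paper can be caricatured with a toy Gaussian model; this is a hedged stand-in, not the authors' Bayesian model. Each phoneme category is characterized by an auditory and a somatosensory prototype, perception fuses the two likelihoods, and shifting the auditory characterization of one phoneme shifts the resulting category boundary.

        import numpy as np
        from scipy.stats import norm

        def posterior(x_aud, x_som, mu_aud, mu_som, sd_aud=1.0, sd_som=1.5):
            """Posterior over two categories given an auditory and a somatosensory cue."""
            like = norm.pdf(x_aud, mu_aud, sd_aud) * norm.pdf(x_som, mu_som, sd_som)
            return like / like.sum()

        mu_aud = np.array([-1.0, 1.0])   # auditory prototypes of phonemes A and B (invented)
        mu_som = np.array([-1.0, 1.0])   # somatosensory prototypes (invented)

        print(posterior(0.2, 0.0, mu_aud, mu_som))           # before adaptation
        mu_aud_shifted = mu_aud + np.array([0.5, 0.0])       # auditory update of phoneme A
        print(posterior(0.2, 0.0, mu_aud_shifted, mu_som))   # category boundary has shifted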

  19. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
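
    The MLE predictions against which the bimodal data were compared follow from two standard equations: the bimodal estimate is a reliability-weighted average of the unimodal estimates, and its variance is smaller than either unimodal variance. A minimal sketch, with hypothetical numbers, is given below.

        def mle_integration(loc_v, sigma_v, loc_a, sigma_a):
            """Predicted audiovisual location and standard deviation under MLE."""
            w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)   # weight on vision
            w_a = 1.0 - w_v
            loc_av = w_v * loc_v + w_a * loc_a
            sigma_av = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
            return loc_av, sigma_av

        # Hypothetical numbers: poorer visual precision shifts weight toward audition.
        print(mle_integration(loc_v=0.0, sigma_v=4.0, loc_a=5.0, sigma_a=2.0))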

  20. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
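
    One of the two cues examined, the direct-to-reverberant ratio, can be computed from a room impulse response by comparing the energy around the direct-path peak with the energy that arrives later. The sketch below is illustrative only; the 2.5-ms direct window and the synthetic impulse response are assumptions, not the study's room-image simulation.

        import numpy as np

        def direct_to_reverberant_db(ir, fs, direct_window_ms=2.5):
            """DRR in dB: energy around the direct-path peak vs. everything after it."""
            peak = np.argmax(np.abs(ir))
            half = int(direct_window_ms * 1e-3 * fs)
            direct = np.sum(ir[max(peak - half, 0):peak + half] ** 2)
            reverberant = np.sum(ir[peak + half:] ** 2)
            return 10 * np.log10(direct / reverberant)

        fs = 44100
        ir = np.zeros(fs)                                  # toy 1-s impulse response
        ir[100] = 1.0                                      # direct path
        tail = np.arange(fs - 2000)
        ir[2000:] = 0.01 * np.random.randn(fs - 2000) * np.exp(-tail / 5000)
        print(round(direct_to_reverberant_db(ir, fs), 1), "dB")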

  1. Local and Global Spatial Organization of Interaural Level Difference and Frequency Preferences in Auditory Cortex

    PubMed Central

    Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M

    2018-01-01

    Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
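
    The opponent-channel readout described above can be sketched as follows; this is a toy monotonic rate code with invented tuning, not the authors' decoder, but it shows how the difference between contra- and ipsi-preferring population responses suffices to classify the sign of the ILD.

        import numpy as np

        rng = np.random.default_rng(0)
        ilds = np.linspace(-20, 20, 9)      # dB, contralateral-favoring ILDs positive

        def population_response(ild, preferred_sign, n_neurons=50):
            """Toy monotonic rate code: rates grow with ILDs of the preferred sign."""
            gain = rng.uniform(0.5, 1.5, n_neurons)
            rates = gain / (1 + np.exp(-preferred_sign * ild / 5))
            return rates + 0.1 * rng.standard_normal(n_neurons)

        for ild in ilds:
            contra = population_response(ild, preferred_sign=+1).sum()
            ipsi = population_response(ild, preferred_sign=-1).sum()
            decoded = "contralateral" if contra - ipsi > 0 else "ipsilateral"
            print(f"ILD {ild:+5.1f} dB -> {decoded}")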

  2. Learning Styles.

    ERIC Educational Resources Information Center

    Missouri Univ., Columbia. Coll. of Education.

    Information is provided regarding major learning styles and other factors important to student learning. Several typically asked questions are presented regarding different learning styles (visual, auditory, tactile and kinesthetic, and multisensory learning), associated considerations, determining individuals' learning styles, and appropriate…

  3. Compatibility of motion facilitates visuomotor synchronization.

    PubMed

    Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L

    2010-12-01

    Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.

  4. Synchronized tapping facilitates learning sound sequences as indexed by the P300.

    PubMed

    Kamiyama, Keiko S; Okanoya, Kazuo

    2014-01-01

    The purpose of the present study was to determine whether and how single-finger tapping in synchrony with sound sequences contributed to the auditory processing of those sequences. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while they tapped in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while they kept pressing a key until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the P300 effect in the tapping condition was stronger in participants whose tapping was more highly synchronized during the learning sessions. These results indicated that single-finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements with external auditory events.

  5. Synchronized tapping facilitates learning sound sequences as indexed by the P300

    PubMed Central

    Kamiyama, Keiko S.; Okanoya, Kazuo

    2014-01-01

    The purpose of the present study was to determine whether and how single-finger tapping in synchrony with sound sequences contributed to the auditory processing of those sequences. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while they tapped in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while they kept pressing a key until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the P300 effect in the tapping condition was stronger in participants whose tapping was more highly synchronized during the learning sessions. These results indicated that single-finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements with external auditory events. PMID:25400564

  6. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment in healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustic distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  8. Listening to Sentences in Noise: Revealing Binaural Hearing Challenges in Patients with Schizophrenia.

    PubMed

    Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily

    2017-11-01

    This case-control study investigated binaural hearing performance for sentences presented in quiet and in noise in patients with schizophrenia. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions using the Malay version of the hearing in noise test. The syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and the spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS than healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., quiet (d=1.07), noise right (d=0.88) and noise composite (d=0.90), indicate a statistically significant difference between the groups, whereas the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking for the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
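
    For reference, the effect sizes quoted above are pooled-standard-deviation Cohen's d values; a minimal computation is sketched below with placeholder reception-threshold data, not the study's measurements.

        import numpy as np

        def cohens_d(group_a, group_b):
            """Cohen's d with a pooled standard deviation."""
            a, b = np.asarray(group_a, float), np.asarray(group_b, float)
            pooled_sd = np.sqrt(((a.size - 1) * a.var(ddof=1) +
                                 (b.size - 1) * b.var(ddof=1)) / (a.size + b.size - 2))
            return (a.mean() - b.mean()) / pooled_sd

        rts_patients = np.random.normal(-1.0, 1.2, 16)   # placeholder RTS values (dB SNR)
        rts_controls = np.random.normal(-2.2, 1.0, 21)
        print("d =", round(cohens_d(rts_patients, rts_controls), 2))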

  9. A virtual display system for conveying three-dimensional acoustic information

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.

    1988-01-01

    The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.

  10. AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX

    PubMed Central

    Weinberger, Norman M.

    2009-01-01

    Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002

  11. Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.

    PubMed

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    2013-01-01

    Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only in a noisy auditory context, but not in silence. After conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. The conditioned group also showed a reduced spread of activation to each tone in noise, but not in silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning degraded the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the CS frequency were more likely to be perceived as the CS in the specific context in which the CS was associated with the US. Together, these results demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.

  12. Dividing time: concurrent timing of auditory and visual events by young and elderly adults.

    PubMed

    McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H

    2010-07-01

    This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.

  13. Learning where to feed: the use of social information in flower-visiting Pallas' long-tongued bats (Glossophaga soricina).

    PubMed

    Rose, Andreas; Kolar, Miriam; Tschapka, Marco; Knörnschild, Mirjam

    2016-03-01

    Social learning is a widespread phenomenon among vertebrates that influences various patterns of behaviour and is often reported with respect to foraging. The use of social information by foraging bats has been documented in insectivorous, carnivorous and frugivorous species, but there are few data on whether flower-visiting nectarivorous bats (Phyllostomidae: Glossophaginae) can acquire information about food from other individuals. In this study, we conducted an experiment with a demonstrator-observer paradigm to investigate whether flower-visiting Pallas' long-tongued bats (Glossophaga soricina) are able to socially learn novel flower positions through observation of, or interaction with, knowledgeable conspecifics. The results demonstrate that flower-visiting G. soricina are able to use social information to locate novel flower positions and can thereby reduce energetically costly search efforts. This social transmission can be explained as a result of local enhancement; learning bats might rely on both visual and echo-acoustical perception and are likely to eavesdrop on auditory cues emitted by feeding conspecifics. We additionally tested the spatial memory capacity of former demonstrator bats when retrieving a learned flower position, and the results indicate that flower-visiting bats remember a learned flower position after several weeks.

  14. Test performance and classification statistics for the Rey Auditory Verbal Learning Test in selected clinical samples.

    PubMed

    Schoenberg, Mike R; Dawson, Kyra A; Duff, Kevin; Patton, Doyle; Scott, James G; Adams, Russell L

    2006-10-01

    The Rey Auditory Verbal Learning Test [RAVLT; Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique. Archives de Psychologie, 28, 21] is a commonly used neuropsychological measure that assesses verbal learning and memory. Normative data have been compiled [Schmidt, M. (1996). Rey Auditory and Verbal Learning Test: A handbook. Los Angeles, CA: Western Psychological Services]. When assessing an individual suspected of neurological dysfunction, useful comparisons include the extent to which the patient deviates from healthy peers and also how closely the patient's performance matches that of people with known brain injury. This study provides the means and standard deviations for 392 individuals with documented neurological dysfunction [closed head TBI (n=68), neoplasms (n=57), stroke (n=47), dementia of the Alzheimer's type (n=158), presurgical epilepsy with left seizure focus (n=28), and presurgical epilepsy with right seizure focus (n=34)] and for 122 patients with psychiatric complaints and no known neurological dysfunction. Patients were stratified into three age groups: 16-35, 36-59, and 60-88 years. Data are provided for trials I-V, List B, immediate recall, 30-min delayed recall, and recognition. An analysis of the classification characteristics of the RAVLT using the meta-norms of [Schmidt, M. (1996). Rey Auditory and Verbal Learning Test: A handbook. Los Angeles, CA: Western Psychological Services] found that the RAVLT best distinguished patients suspected of Alzheimer's disease from the psychiatric comparison group.

  15. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  16. Multisensory Integration Affects Visuo-Spatial Working Memory

    ERIC Educational Resources Information Center

    Botta, Fabiano; Santangelo, Valerio; Raffone, Antonino; Sanabria, Daniel; Lupianez, Juan; Belardinelli, Marta Olivetti

    2011-01-01

    In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial…

  17. Characterizing spatial tuning functions of neurons in the auditory cortex of young and aged monkeys: a new perspective on old data

    PubMed Central

    Engle, James R.; Recanzone, Gregg H.

    2012-01-01

    Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficit are peripheral in origin, others are centrally mediated. One such deficit is in the ability to localize sounds, a critical component of segregating different acoustic objects and events that is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning from neurons in primary auditory cortex to the caudal lateral field does not occur as it does in younger animals. As a decrease in inhibition with aging is common in the ascending auditory system, it is possible that this lack of sharpening is due to a decrease in inhibition at different periods within the response. It is also possible that spatial tuning was decreased as a consequence of reduced inhibition at non-best locations. In this report we found that aged animals had greater activity throughout the response period, but primarily during the onset of the response. This was most prominent at non-best directions, which is consistent with the hypothesis that inhibition is a primary mechanism for sharpening spatial tuning curves. We also noted that in aged animals the latency of the response was much shorter than in younger animals, which is consistent with a decrease in pre-onset inhibition. These results can be interpreted in the context of a failure of the timing and efficiency of feed-forward thalamo-cortical and cortico-cortical circuits in aged animals. Such a mechanism, if generalized across cortical areas, could play a major role in age-related cognitive decline. PMID:23316160

  18. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling

    PubMed Central

    Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J.

    2014-01-01

    Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm that takes associations between auditory information and semantic information into consideration. Direct phonetic–semantic association is simulated in order to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We used a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure were introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM learns well the auditory and semantic categories presented in the training data; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to detailed categorization and clustering, while keeping the clusters that have already been learned and the network structure that has already developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory–semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically inspired neurocomputational model. PMID:24688478
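
    A greatly simplified, fixed-size stand-in for the I-GSOM idea is sketched below (the published algorithm grows its maps, which this sketch omits): two self-organizing maps, one auditory and one semantic, joined by Hebbian links that are reinforced when paired inputs co-activate (reinforcing-by-link) and slowly decay otherwise (link-forgetting). All dimensions and training data are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        N_UNITS, DIM_A, DIM_S = 25, 12, 8       # map size and toy feature dimensions

        som_a = rng.random((N_UNITS, DIM_A))    # auditory map weights
        som_s = rng.random((N_UNITS, DIM_S))    # semantic map weights
        links = np.zeros((N_UNITS, N_UNITS))    # auditory-to-semantic association strengths

        def best_unit(som, x):
            return int(np.argmin(np.linalg.norm(som - x, axis=1)))

        def som_update(som, x, winner, lr=0.2, sigma=1.5):
            d = np.abs(np.arange(len(som)) - winner)      # 1-D neighborhood for brevity
            h = np.exp(-d ** 2 / (2 * sigma ** 2))[:, None]
            som += lr * h * (x - som)                     # in-place weight update

        def train_pair(x_audio, x_sem, lr_link=0.1, forget=0.01):
            wa, ws = best_unit(som_a, x_audio), best_unit(som_s, x_sem)
            som_update(som_a, x_audio, wa)
            som_update(som_s, x_sem, ws)
            links[:] *= 1.0 - forget                      # link-forgetting
            links[wa, ws] += lr_link                      # reinforcing-by-link

        for _ in range(500):                              # toy paired training data
            k = rng.integers(3)                           # three underlying categories
            train_pair(rng.normal(k, 0.3, DIM_A), rng.normal(k, 0.3, DIM_S))

        # Which semantic unit does a new auditory input most strongly activate?
        probe = rng.normal(1, 0.3, DIM_A)
        print("associated semantic unit:", int(links[best_unit(som_a, probe)].argmax()))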

  19. On the Auditory-Proprioception Substitution Hypothesis: Movement Sonification in Two Deafferented Subjects Learning to Write New Characters

    PubMed Central

    Danna, Jérémy; Velay, Jean-Luc

    2017-01-01

    The aim of this study was to evaluate the compensatory effects of real-time auditory feedback in two proprioceptively deafferented subjects. The real-time auditory feedback was based on a movement-sonification approach, which consists of translating selected movement variables into synthetic sounds to make them audible. The two deafferented subjects and 16 age-matched control participants were asked to learn four new characters. The characters were learned under two conditions, one without sonification and one with sonification, in a within-subject protocol. The results revealed, first, that characters learned with sonification were reproduced more quickly and more fluently than characters learned without it, and that the effects of sonification were larger in deafferented than in control subjects. Second, whereas control subjects were able to learn the characters without sounds, the deafferented subjects were able to learn them only when they were trained with sonification. Third, although the improvement persisted in controls, the performance of the deafferented subjects returned to the pre-test level 2 h after the training with sounds. Finally, the two deafferented subjects performed differently from each other, highlighting the importance of studying at least two subjects to better understand the loss of proprioception and its impact on motor control and learning. To conclude, movement sonification may compensate for a lack of proprioception, supporting the auditory-proprioception substitution hypothesis. However, sonification would act as a “sensory prosthesis” helping deafferented subjects to better feel their movements, without permanently modifying their motor performance once the prosthesis is removed. Potential clinical applications for motor rehabilitation are numerous: people with a limb prosthesis, a stroke, or a peripheral nerve injury may potentially benefit. PMID:28386211
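
    The movement-sonification idea, translating a movement variable into a synthetic sound, can be illustrated with a short sketch. The mapping below (tangential pen velocity to pitch) and the sampling rates are assumptions for illustration, not the sonification used in the study.

        import numpy as np

        fs_audio, fs_pen = 44100, 100                    # audio and pen sampling rates (assumed)
        t = np.arange(0, 2, 1 / fs_pen)
        x, y = np.cos(2 * np.pi * t), np.sin(4 * np.pi * t)   # toy 2-s pen trajectory

        speed = np.hypot(np.gradient(x, 1 / fs_pen), np.gradient(y, 1 / fs_pen))
        freq = 220 + 660 * speed / speed.max()           # map speed onto a 220-880 Hz pitch range

        t_audio = np.arange(0, 2, 1 / fs_audio)
        freq_audio = np.interp(t_audio, t, freq)
        phase = 2 * np.pi * np.cumsum(freq_audio) / fs_audio   # phase-continuous frequency sweep
        sonified = 0.3 * np.sin(phase)                   # audio buffer to be played back to the writer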

  20. A robotic voice simulator and the interactive training for hearing-impaired people.

    PubMed

    Sawada, Hideyuki; Kitani, Mitsuki; Hayashi, Yasumori

    2008-01-01

    A talking and singing robot that adaptively learns vocalization skills by means of an auditory feedback learning algorithm is being developed. The robot consists of motor-controlled vocal organs, such as vocal cords, a vocal tract and a nasal cavity, which generate a natural voice imitating human vocalization. In this study, the robot is applied to a speech-articulation training system for the hearing-impaired, because the robot is able to reproduce their vocalizations and show them how these should be modified to produce clear speech. The paper briefly introduces the mechanical construction of the robot and how it autonomously acquires vocalization skills through auditory feedback learning by listening to human speech. The training system is then described, together with an evaluation of the speech training by hearing-impaired people.

  1. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  2. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  3. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  4. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
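
    The classification step can be sketched with simulated data; the features, signal strength, and averaging scheme below are assumptions, not the study's EEG, but the structure (a linear support vector machine applied to single-trial and trial-averaged epochs) follows the description above.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_epochs, n_features = 300, 64              # e.g., channels x time windows, flattened
        X = rng.standard_normal((n_epochs, n_features))
        y = rng.integers(0, 2, n_epochs)            # 1 = attended direction, 0 = other
        X[y == 1, :8] += 0.4                        # toy P300-like signal on a few features

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        print("single-trial accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))

        def average_trials(X, y, k=10):
            """Average every k epochs of the same class to boost signal-to-noise ratio."""
            Xa, ya = [], []
            for label in np.unique(y):
                idx = np.flatnonzero(y == label)
                for i in range(0, len(idx) - k + 1, k):
                    Xa.append(X[idx[i:i + k]].mean(axis=0))
                    ya.append(label)
            return np.array(Xa), np.array(ya)

        Xa, ya = average_trials(X, y)
        print("10-trial-average accuracy:", cross_val_score(clf, Xa, ya, cv=5).mean().round(2))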

  5. Compression of auditory space during forward self-motion.

    PubMed

    Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti

    2012-01-01

    Spatial inputs from the auditory periphery change with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound had been presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.

  6. Nonhomogeneous transfer reveals specificity in speech motor learning.

    PubMed

    Rochet-Capellan, Amélie; Richer, Lara; Ostry, David J

    2012-03-01

    Does motor learning generalize to new situations that are not experienced during training, or is motor learning essentially specific to the training situation? In the present experiments, we use speech production as a model to investigate generalization in motor learning. We tested for generalization from training to transfer utterances by varying the acoustical similarity between these two sets of utterances. During the training phase of the experiment, subjects received auditory feedback that was altered in real time as they repeated a single consonant-vowel-consonant utterance. Different groups of subjects were trained with different consonant-vowel-consonant utterances, which differed from a subsequent transfer utterance in terms of the initial consonant or vowel. During the adaptation phase of the experiment, we observed that subjects in all groups progressively changed their speech output to compensate for the perturbation (altered auditory feedback). After learning, we tested for generalization by having all subjects produce the same single transfer utterance while receiving unaltered auditory feedback. We observed limited transfer of learning, which depended on the acoustical similarity between the training and the transfer utterances. The gradients of generalization observed here are comparable to those observed in limb movement. The present findings are consistent with the conclusion that speech learning remains specific to individual instances of learning.

  7. Nonhomogeneous transfer reveals specificity in speech motor learning

    PubMed Central

    Rochet-Capellan, Amélie; Richer, Lara

    2012-01-01

    Does motor learning generalize to new situations that are not experienced during training, or is motor learning essentially specific to the training situation? In the present experiments, we use speech production as a model to investigate generalization in motor learning. We tested for generalization from training to transfer utterances by varying the acoustical similarity between these two sets of utterances. During the training phase of the experiment, subjects received auditory feedback that was altered in real time as they repeated a single consonant-vowel-consonant utterance. Different groups of subjects were trained with different consonant-vowel-consonant utterances, which differed from a subsequent transfer utterance in terms of the initial consonant or vowel. During the adaptation phase of the experiment, we observed that subjects in all groups progressively changed their speech output to compensate for the perturbation (altered auditory feedback). After learning, we tested for generalization by having all subjects produce the same single transfer utterance while receiving unaltered auditory feedback. We observed limited transfer of learning, which depended on the acoustical similarity between the training and the transfer utterances. The gradients of generalization observed here are comparable to those observed in limb movement. The present findings are consistent with the conclusion that speech learning remains specific to individual instances of learning. PMID:22190628

  8. Cellular and Molecular Underpinnings of Neuronal Assembly in the Central Auditory System during Mouse Development

    PubMed Central

    Di Bonito, Maria; Studer, Michèle

    2017-01-01

    During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562

  9. Musical Experience Influences Statistical Learning of a Novel Language

    PubMed Central

    Shook, Anthony; Marian, Viorica; Bartolotti, James; Schroeder, Scott R.

    2014-01-01

    Musical experience may benefit learning a new language by enhancing the fidelity with which the auditory system encodes sound. In the current study, participants with varying degrees of musical experience were exposed to two statistically-defined languages consisting of auditory Morse-code sequences which varied in difficulty. We found an advantage for highly-skilled musicians, relative to less-skilled musicians, in learning novel Morse-code based words. Furthermore, in the more difficult learning condition, performance of lower-skilled musicians was mediated by their general cognitive abilities. We suggest that musical experience may lead to enhanced processing of statistical information and that musicians’ enhanced ability to learn statistical probabilities in a novel Morse-code language may extend to natural language learning. PMID:23505962

  10. Variables affecting learning in a simulation experience: a mixed methods study.

    PubMed

    Beischel, Kelly P

    2013-02-01

    The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.

  11. Brain-wide maps of Fos expression during fear learning and recall.

    PubMed

    Cho, Jin-Hyung; Rendall, Sam D; Gray, Jesse M

    2017-04-01

    Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. © 2017 Cho et al.; Published by Cold Spring Harbor Laboratory Press.

  12. Brain-wide maps of Fos expression during fear learning and recall

    PubMed Central

    Cho, Jin-Hyung; Rendall, Sam D.; Gray, Jesse M.

    2017-01-01

    Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. PMID:28331016

  13. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Sensori-Motor Learning with Movement Sonification: Perspectives from Recent Interdisciplinary Studies.

    PubMed

    Bevilacqua, Frédéric; Boyer, Eric O; Françoise, Jules; Houix, Olivier; Susini, Patrick; Roby-Brami, Agnès; Hanneton, Sylvain

    2016-01-01

    This article reports on an interdisciplinary research project on movement sonification for sensori-motor learning. First, we describe the different research fields that have contributed to movement sonification, from music technology (including gesture-controlled sound synthesis and sonic interaction design) to research on sensori-motor learning with auditory feedback. In particular, we propose to distinguish between sound-oriented tasks and movement-oriented tasks in experiments involving interactive sound feedback. We describe several research questions and recently published results on movement control, learning and perception. In particular, we studied the effect of auditory feedback on movements in several cases: from experiments on pointing and visuo-motor tracking to more complex tasks where interactive sound feedback can guide movements, and cases of sensory substitution where auditory feedback can convey information about object shapes. We also developed specific methodologies and technologies for designing sonic feedback and movement sonification. We conclude with a discussion of key future research challenges in sensori-motor learning with movement sonification and point toward promising applications such as rehabilitation, sports training and product design.

  15. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that used psychophysical approaches in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities in azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls, which are produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study. Results showed that males produced highly directional calls and that this directionality, together with social status, influenced the responses of receivers. The playback study confirmed that the isolated acoustic components of this display elicited similar responses among males. Together, these three chapters provide further information about comparative aspects of spatial auditory processing in pinnipeds.

  16. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each one consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, a space bisection task, participants had to judge the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and higher auditory weights) in the attentional condition relative to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
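
    The "optimal fashion" referred to above is the standard maximum-likelihood (inverse-variance) cue-combination rule. As a point of reference only — the notation below is generic and not necessarily the authors' exact parameterization — the prediction can be written as:

        \hat{S}_{AT} = w_A \hat{S}_A + w_T \hat{S}_T, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_T^2}, \quad w_T = 1 - w_A, \qquad \sigma_{AT}^2 = \frac{\sigma_A^2\,\sigma_T^2}{\sigma_A^2 + \sigma_T^2}

    On this formulation, directing attention to the sound would lower the auditory variance \sigma_A^2 and thereby raise the auditory weight w_A, which is the dynamic reweighting the study reports.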

  17. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  18. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    PubMed Central

    Henshaw, Helen; Ferguson, Melanie A.

    2013-01-01

    Background: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods: A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results: Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions: Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431

  19. Toward a dual-learning systems model of speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Koslov, Seth R.; Maddox, W. T.

    2014-01-01

    More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions. PMID:25132827

  20. Psychometric properties of Persian version of the Sustained Auditory Attention Capacity Test in children with attention deficit-hyperactivity disorder.

    PubMed

    Soltanparast, Sanaz; Jafari, Zahra; Sameni, Seyed Jalal; Salehi, Masoud

    2014-01-01

    The purpose of the present study was to evaluate the psychometric properties (validity and reliability) of the Persian version of the Sustained Auditory Attention Capacity Test in children with attention deficit hyperactivity disorder. The Persian version of the Sustained Auditory Attention Capacity Test was constructed to assess sustained auditory attention using the method provided by Feniman and colleagues (2007). The test assesses the child's attentional deficit by means of inattention and impulsiveness errors, the total score of the sustained auditory attention capacity test, and an attention span reduction index. To determine validity and reliability, 46 normal children and 41 children with attention deficit hyperactivity disorder (ADHD), all right-handed, of both genders, and aged between 7 and 11 years, were evaluated with both the Rey Auditory Verbal Learning Test and the Persian version of the Sustained Auditory Attention Capacity Test (SAACT). In assessing convergent validity, a significant negative correlation was found between three parts of the Rey Auditory Verbal Learning Test (first trial, fifth trial, and immediate recall) and all indicators of the SAACT except attention span reduction. Comparing test scores between the normal and ADHD groups, discriminant validity analysis showed significant differences in all indicators of the test except attention span reduction (p < 0.001). The Persian version of the SAACT has good validity and reliability, comparable to other established tests, and can be used to identify children with attention deficits and children suspected of having attention deficit hyperactivity disorder.

  1. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children

    PubMed Central

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C.; Hornickel, Jane; Strait, Dana L.; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the development of strategies for auditory learning. PMID:25414631

  2. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    PubMed

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the development of strategies for auditory learning.

  3. Different verbal learning strategies in autism spectrum disorder: evidence from the Rey Auditory Verbal Learning Test.

    PubMed

    Bowler, Dermot M; Limoges, Elyse; Mottron, Laurent

    2009-06-01

    The Rey Auditory Verbal Learning Test, which requires the free recall of the same list of 15 unrelated words over 5 trials, was administered to 21 high-functioning adolescents and adults with autism spectrum disorder (ASD) and 21 matched typical individuals. The groups showed similar overall levels of free recall, rates of learning over trials and subjective organisation of their recall. However, the primacy portion of the serial position curve of the ASD participants showed slower growth over trials than that of the typical participants. The implications of this finding for our understanding of memory in ASD are discussed.

  4. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    PubMed

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.

  5. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain

    PubMed Central

    Stringer, Simon

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. PMID:28797034
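
    Both versions of this record describe local spike-time dependent plasticity (STDP) as the learning rule. As a rough point of reference — a generic textbook pairwise rule, not the specific kernel, parameters, or implementation used in this model — an STDP weight update can be sketched in Python as follows; all numeric values are illustrative assumptions:

        import numpy as np

        def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0,
                        w_min=0.0, w_max=1.0):
            """Pairwise STDP for one pre/post spike pair; dt = t_post - t_pre in ms.
            Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses."""
            if dt > 0:
                w += a_plus * np.exp(-dt / tau_plus)
            else:
                w -= a_minus * np.exp(dt / tau_minus)
            return float(np.clip(w, w_min, w_max))

        print(stdp_update(0.5, dt=5.0))   # pre leads post by 5 ms -> weight slightly increases
        print(stdp_update(0.5, dt=-5.0))  # post leads pre by 5 ms -> weight slightly decreases

    The asymmetry of such a rule (potentiation for pre-before-post pairings, depression otherwise) is what allows repeated, relatively stable spatio-temporal input patterns to become self-reinforcing, consistent with the requirement noted in the abstract.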

  6. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.

    PubMed

    Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".

  7. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians

    PubMed Central

    Clayton, Kameron K.; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D.; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, “cocktail-party” like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the “cocktail party problem”. PMID:27384330

  8. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  9. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  10. Converging Modalities Ground Abstract Categories: The Case of Politics

    PubMed Central

    Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360

  11. Converging modalities ground abstract categories: the case of politics.

    PubMed

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  12. Auditory rehabilitation after stroke: treatment of auditory processing disorders in stroke patients with personal frequency-modulated (FM) systems.

    PubMed

    Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva

    2017-03-01

    Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments and AP tests, and completed the (modified) Amsterdam Inventory for Auditory Disability and the Hearing Handicap Inventory for the Elderly questionnaires. Nine out of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. SRM, i.e., the benefit of spatially separating the babble from the target speech, was significantly greater when the patients had the FM systems in their ears than without them. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation: This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems, when noise was spatially separated from the speech signal by 90°, compared with unaided listening. Personal FM systems are feasible in stroke patients, and may be of benefit in just under 20% of this population, who are not eligible for conventional hearing aids.
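
    Using the definition given above, spatial release from masking is simply the difference between the two 50%-correct SNRs. With purely illustrative numbers (not data from this study):

        \mathrm{SRM} = \mathrm{SNR}_{\text{co-located}} - \mathrm{SNR}_{\text{separated}}, \qquad \text{e.g. } (-1\ \mathrm{dB}) - (-7\ \mathrm{dB}) = 6\ \mathrm{dB}

    A larger SRM with the FM systems in place thus means the patients gained more from the spatial separation of speech and babble than they did unaided.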

  13. Primary auditory cortex regulates threat memory specificity.

    PubMed

    Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.

  14. Operator Performance Measures for Assessing Voice Communication Effectiveness

    DTIC Science & Technology

    1989-07-01

    [Fragmentary OCR excerpt.] The recoverable content indicates that performance and workload assessment techniques have been based on models such as Broadbent's (1958) limited-capacity filter model of human information processing, and lists report sections on auditory information processing (auditory attention, auditory memory) and models of information processing (capacity theories), with keywords including learning, attention, language specialization, decision making and problem solving.

  15. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals

    PubMed Central

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz

    2016-01-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504

  16. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals.

    PubMed

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz; MacNeilage, Paul R

    2016-08-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. Copyright © 2016 the American Physiological Society.
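
    The linear-combination model described in both versions of this record can be stated compactly; the symbols below are illustrative rather than the authors' notation:

        \hat{\theta}_{\text{head}} = w_{\text{vest}}\,\theta_{\text{vest}} + w_{\text{prop}}\,\theta_{\text{prop}}, \qquad w_{\text{vest}} + w_{\text{prop}} = 1

    Fitting such weights to the updating errors across the active, passive and cancellation conditions is what supports the conclusion that the vestibular weight dominates while proprioception/efference copy still contributes significantly.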

  17. Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation

    PubMed Central

    2013-01-01

    Background: Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training and one post-training sessions. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. Results: After both passive listening and active training, the amplitude of the P2m wave with latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. Conclusion: The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability for scrutinizing fine differences in pre-voicing time. Different trajectories of brain and behaviour changes suggest that the preceding effect of a P2m increase relates to brain processes, which are necessary precursors of perceptual learning. Cautious discussion is required when interpreting the finding of a P2 amplitude increase between recordings before and after training and learning. PMID:24314010

  18. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.

  19. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  20. The Effects of Auditory Information on 4-Month-Old Infants' Perception of Trajectory Continuity

    ERIC Educational Resources Information Center

    Bremner, J. Gavin; Slater, Alan M.; Johnson, Scott P.; Mason, Uschi C.; Spring, Jo

    2012-01-01

    Young infants perceive an object's trajectory as continuous across occlusion provided the temporal or spatial gap in perception is small. In 3 experiments involving 72 participants the authors investigated the effects of different forms of auditory information on 4-month-olds' perception of trajectory continuity. Provision of dynamic auditory…

  1. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  2. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  3. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    PubMed

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic rules to more complex ones such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that the enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex, and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
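
    Segmentation "on the sole basis of conditional probabilities between adjacent sounds" amounts to computing transitional probabilities and placing boundaries where they dip. The toy sketch below (the stream and its two embedded "words" are invented purely for illustration) shows the computation:

        from collections import Counter

        def transitional_probs(stream):
            """P(next | current) for adjacent elements of a sequence."""
            pairs = Counter(zip(stream, stream[1:]))
            firsts = Counter(stream[:-1])
            return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

        # Toy stream built from two repeating "words", AB and CD
        stream = list("ABCDABABCDCDAB")
        for pair, p in sorted(transitional_probs(stream).items()):
            print(pair, round(p, 2))

    In this toy stream the within-"word" transitions (A→B, C→D) come out at 1.0 while the between-"word" transitions are lower, which is the statistical cue such segmentation is thought to exploit.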

  4. Intentional switching in auditory selective attention: Exploring age-related effects in a spatial setup requiring speech perception.

    PubMed

    Oberem, Josefa; Koch, Iring; Fels, Janina

    2017-06-01

    Using a binaural-listening paradigm, age-related differences in the ability to intentionally switch auditory selective attention between two speakers, defined by their spatial location, were examined. To this end, 40 normal-hearing participants were tested (20 young, mean age 24.8 years; 20 older, mean age 67.8 years). The spatial reproduction of stimuli was provided by headphones using head-related transfer functions of an artificial head. Spoken number words of two speakers were presented simultaneously to participants from two out of eight locations on the horizontal plane. Guided by a visual cue indicating the spatial location of the target speaker, the participants were asked to categorize the target's number word as smaller vs. greater than five while ignoring the distractor's speech. Results showed significantly higher reaction times and error rates for older participants. The relative influence of the spatial switch of the target speaker (switch or repetition of the speaker's direction in space) was identical across age groups. Congruency effects (stimuli spoken by target and distractor may evoke the same answer or different answers) were increased for older participants and depended on the target's position. Results suggest that the ability to intentionally switch auditory attention to a newly cued location was unimpaired, whereas it was generally harder for older participants to suppress processing of the distractor's speech. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Exploring the temporal dynamics of sustained and transient spatial attention using steady-state visual evoked potentials.

    PubMed

    Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte

    2017-05-01

    While the behavioral dynamics as well as the functional network of sustained and transient attention have extensively been studied, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to deviant transient either auditory or visual stimuli. Steady-state visual evoked potentials (SSVEPs) elicited by two task irrelevant pattern reversing checkerboards flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and were separately analyzed. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent, but interacting neural mechanisms of sustained and transient attentional orienting.
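
    Single-trial SSVEP amplitude and phase at the two tagging frequencies can be estimated, in the simplest case, from the Fourier coefficient at each flicker frequency. The sketch below is only meant to make that step concrete; the sampling rate, window length and synthetic signal are assumptions, not details taken from the study:

        import numpy as np

        def ssvep_amp_phase(trial, fs, freq):
            """Amplitude and phase of one trial at a single tagging frequency,
            computed from the discrete Fourier coefficient at that frequency."""
            n = len(trial)
            t = np.arange(n) / fs
            coeff = np.sum(trial * np.exp(-2j * np.pi * freq * t)) * 2 / n
            return np.abs(coeff), np.angle(coeff)

        fs = 500.0                                   # assumed sampling rate in Hz
        t = np.arange(0, 1.0, 1 / fs)                # one-second analysis window
        trial = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        print(ssvep_amp_phase(trial, fs, 10.0))      # amplitude near 1, phase near -pi/2

    Inter-trial phase locking of the kind reported above can then be summarized as the magnitude of the mean unit phasor exp(i·phase) across trials.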

  6. Primitive Auditory Memory Is Correlated with Spatial Unmasking That Is Based on Direct-Reflection Integration

    PubMed Central

    Li, Huahui; Kong, Lingzhi; Wu, Xihong; Li, Liang

    2013-01-01

    In reverberant rooms with multiple people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadowing and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and results in spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamics of this direct-reflection-integration-based spatial unmasking as a function of the reflection delay, and (2) whether these dynamics are correlated with listeners' auditory ability to temporally retain raw acoustic signals (i.e., the fast-decaying primitive auditory memory, PAM). The results showed that recognition of the target speech against the speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamics of the speech-recognition function are significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, which links the earlier-stage PAM with later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions. PMID:23658664
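
    The abstract describes this relationship only qualitatively. As an illustration (an assumed parameterization, not the paper's fitted equation), a descending exponential dependence of recognition performance R on reflection delay tau can be written as

    $$ R(\tau) = R_{\infty} + (R_{0} - R_{\infty})\, e^{-\tau/\tau_{c}} $$

    where R_0 is performance at zero delay, R_infinity is the asymptote at long delays, and tau_c is the decay constant; it is this temporal decay that the study relates to the extent of low-frequency PAM.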

  7. Short-Term Memory for Space and Time Flexibly Recruit Complementary Sensory-Biased Frontal Lobe Attention Networks.

    PubMed

    Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C

    2015-08-19

    The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Intracranial mapping of auditory perception: event-related responses and electrocortical stimulation.

    PubMed

    Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D

    2009-01-01

    We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
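
    For reference, the sensitivity, specificity, and positive predictive value quoted above follow directly from site-by-site agreement between the electrophysiological responses and ESM. A minimal sketch with hypothetical electrode counts (chosen only so the ratios reproduce the reported N1 values; they are not the study's data):

```python
# Sensitivity, specificity, and positive predictive value from a 2x2 table of
# electrode sites, with ESM as the reference standard.
def diagnostic_measures(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # responsive sites among ESM-critical sites
    specificity = tn / (tn + fp)   # non-responsive sites among non-critical sites
    ppv = tp / (tp + fp)           # ESM-critical sites among responsive sites
    return sensitivity, specificity, ppv

# Hypothetical counts: 9 true positives, 19 false positives, 3 false negatives, 87 true negatives
print(diagnostic_measures(tp=9, fp=19, fn=3, tn=87))  # ~ (0.75, 0.82, 0.32)
```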

  9. Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation

    PubMed Central

    Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.

    2010-01-01

    Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540

  10. Cross-modal detection using various temporal and spatial configurations.

    PubMed

    Schirillo, James A

    2011-01-01

    To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8° and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
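
    The detection measures named above (d' and the criterion β) come from standard equal-variance signal detection theory. A minimal sketch with illustrative hit and false-alarm rates (not the paper's data):

```python
import numpy as np
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # sensitivity
    c = -0.5 * (z_hit + z_fa)           # criterion location
    beta = np.exp(d_prime * c)          # likelihood-ratio criterion (β)
    return d_prime, c, beta

# Illustrative rates for sound-plus-light versus noise-only trials
print(sdt_measures(hit_rate=0.80, fa_rate=0.25))
```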

  11. Learning Disability Assessed through Audiologic and Physiologic Measures: A Case Study.

    ERIC Educational Resources Information Center

    Greenblatt, Edward R.; And Others

    1983-01-01

    The report describes a child with central auditory dysfunction, the first reported case in which brain-stem dysfunction on audiologic tests was associated with specific electrophysiologic changes in the brain-stem auditory-evoked responses. (Author/CL)

  12. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  13. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  14. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  15. Methods for detecting long-term CNS dysfunction after prenatal exposure to neurotoxins.

    PubMed

    Vorhees, C V

    1997-11-01

    Current U.S. Environmental Protection Agency regulatory guidelines for developmental neurotoxicity emphasize functional categories such as motor activity, auditory startle, and learning and memory. A single test of some simple form of learning and memory is accepted to meet the latter category. The rationale for this emphasis has been that sensitive and reliable methods for assessing complex learning and memory are either not available or are too burdensome, and that insufficient data exist to endorse one approach over another. There has been little discussion of the fact that learning and memory is not a single identifiable functional category and that no single test can assess all types of learning and memory. Three methods for assessing complex learning and memory are presented that assess two different types of learning and memory, are relatively efficient to conduct, and are sensitive to several known neurobehavioral teratogens. The tests are a 9-unit multiple-T swimming maze and the Morris and Barnes mazes. The first of these assesses sequential learning, while the latter two assess spatial learning. A description of each test is provided, along with procedures for their use, and data exemplifying effects obtained using developmental exposure to phenytoin, methamphetamine, and MDMA. It is argued that multiple tests of learning and memory are required to ascertain cognitive deficits, something no single method can accomplish. Methods for acoustic startle are also presented.

  16. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    PubMed

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities.
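
    A minimal sketch, under assumed synthesis parameters (Gaussian white noise, 44.1 kHz sampling rate), of how the cyclic-noise stimuli and their looped and scrambled variants described above can be constructed:

```python
import numpy as np

fs = 44100                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(1)

half = rng.normal(0.0, 1.0, fs // 2)          # 0.5 s of white noise
cyclic_noise = np.concatenate([half, half])   # 1-s cyclic noise (CN): two identical halves
plain_noise = rng.normal(0.0, 1.0, fs)        # 1-s plain random noise (N)

looped = np.roll(cyclic_noise, fs // 4)       # "looped": cycle origin shifted, content unchanged

bit = int(0.010 * fs)                         # "scrambled": chop into 10-ms bits and shuffle
bits = cyclic_noise[: (len(cyclic_noise) // bit) * bit].reshape(-1, bit)
scrambled = rng.permutation(bits).reshape(-1)
```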

  17. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns

    PubMed Central

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J.

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities. PMID:27932941

  18. Effect of Auditory Constraints on Motor Performance Depends on Stage of Recovery Post-Stroke

    PubMed Central

    Aluru, Viswanath; Lu, Ying; Leung, Alan; Verghese, Joe; Raghavan, Preeti

    2014-01-01

    In order to develop evidence-based rehabilitation protocols post-stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post-stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in 20 subjects with chronic hemiparesis. We used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups, which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, the condition without auditory cueing increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post-stroke. PMID:25002859
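
    A minimal sketch (with simulated feature values, not the study's data) of hierarchical clustering under the Mahalanobis metric on two kinematic features, as used above to stratify subjects into recovery-stage groups:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
features = rng.normal(size=(20, 2))     # 20 subjects x [baseline speed, extent of wrist movement]

vi = np.linalg.inv(np.cov(features.T))  # inverse covariance matrix for the Mahalanobis metric
dists = pdist(features, metric="mahalanobis", VI=vi)
tree = linkage(dists, method="average")
groups = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into three groups
print(groups)
```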

  19. Learning styles of first-year medical students attending Erciyes University in Kayseri, Turkey.

    PubMed

    Baykan, Zeynep; Naçar, Melis

    2007-06-01

    Educational researchers postulate that every individual has a different learning style. The aim of this descriptive study was to determine the learning styles of first-year medical students using the Turkish version of the visual, auditory, read-write, kinesthetic (VARK) questionnaire. This study was performed at the Department of Medical Education of Erciyes University in February 2006. The Turkish version of the VARK questionnaire was administered to first-year medical students to determine their preferred mode of learning. According to the VARK questionnaire, students were divided into five groups (visual learners, read-write learners, auditory learners, kinesthetic learners, and multimodal learners). Unimodal preferences accounted for 36.1% of students and multimodal preferences for 63.9%. Among the 155 students who participated in the study, 23.3% were kinesthetic, 7.7% were auditory, 3.2% were visual, and 1.9% were read-write learners. Some students preferred multiple modes: bimodal (30.3%), trimodal (20.7%), and quadmodal (12.9%). The learning styles did not differ between male and female students, and no statistically significant difference was found between first-semester grade point averages and learning styles. Knowing that our students have different preferred learning modes will help the medical instructors in our faculty develop appropriate learning approaches and explore opportunities so that they will be able to make the educational experience more productive.

  20. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    NASA Astrophysics Data System (ADS)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  1. The effect of spatial auditory landmarks on ambulation.

    PubMed

    Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E

    2018-02-01

    The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 cm from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment was performed in which the speaker's azimuthal position was moved to 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that auditory inputs act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Investigating three types of continuous auditory feedback in visuo-manual tracking.

    PubMed

    Boyer, Éric O; Bevilacqua, Frédéric; Susini, Patrick; Hanneton, Sylvain

    2017-03-01

    The use of continuous auditory feedback for motor control and learning is still understudied and deserves more attention regarding both fundamental mechanisms and applications. This paper presents the results of three experiments studying the contribution of task-, error-, and user-related sonification to visuo-manual tracking and assessing its benefits for sensorimotor learning. The first results show that sonification can help decrease the tracking error as well as increase the energy in participants' movements. In the second experiment, when feedback presence was alternated, the user-related sonification did not show feedback-dependency effects, contrary to the error- and task-related feedback. In the third experiment, a reduced exposure of 50% diminished the positive effect of sonification on performance, whereas the increase of average movement energy with sound remained significant. In a retention test performed the next day without auditory feedback, movement energy was still superior for the groups previously trained with the feedback. Although performance was not affected by sound, a learning effect was measurable in both sessions, and the user-related group also improved its performance in the retention test. These results confirm that continuous auditory feedback can be beneficial for movement training and also show an interesting effect of sonification on movement energy. User-related sonification can prevent feedback dependency and increase retention. Consequently, sonification of the user's own motion appears to be a promising solution to support movement learning with interactive feedback.
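
    As a concrete illustration of one of the feedback types above, a hypothetical "error-related" sonification can map the instantaneous tracking error onto the pitch of a tone, so that larger errors are heard as higher pitches. The mapping range below is an assumption, not the authors' implementation:

```python
import numpy as np

def error_to_pitch(error, err_max=1.0, f_min=220.0, f_max=880.0):
    """Map a tracking error in [0, err_max] to a tone frequency in Hz (logarithmic mapping)."""
    e = np.clip(error / err_max, 0.0, 1.0)
    return f_min * (f_max / f_min) ** e

# Example: a simulated error trace sampled at the display frame rate
errors = np.abs(np.sin(np.linspace(0.0, 3.0, 180)))
frequencies = error_to_pitch(errors)
print(frequencies[:5])
```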

  3. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    ERIC Educational Resources Information Center

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  4. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  5. A Quality Improvement Study on Avoidable Stressors and Countermeasures Affecting Surgical Motor Performance and Learning

    PubMed Central

    Conrad, Claudius; Konuk, Yusuf; Werner, Paul D.; Cao, Caroline G.; Warshaw, Andrew L.; Rattner, David W.; Stangenberg, Lars; Ott, Harald C.; Jones, Daniel B.; Miller, Diane L; Gee, Denise W.

    2012-01-01

    OBJECTIVE To explore how the two most important components of surgical performance, speed and accuracy, are influenced by different forms of stress, and what the impact of music on these factors is. SUMMARY BACKGROUND DATA Based on a recently published pilot study on surgical experts, we designed an experiment examining the effects of auditory stress, mental stress, and music on surgical performance and learning, and then correlated these data with psychometric measures of the role of music in a novice surgeon's life. METHODS 31 surgeons were recruited for a crossover study. Surgeons were randomized to four simple standardized tasks to be performed on the Surgical SIM VR laparoscopic simulator, allowing exact tracking of speed and accuracy. Tasks were performed under a variety of conditions, including silence, dichotic music (auditory stress), defined classical music (auditory relaxation), and mental loading (mental arithmetic tasks). Tasks were performed twice to test for memory consolidation and to account for baseline variability. Performance was correlated with the Brief Musical Experience Questionnaire (MEQ). RESULTS Mental loading influences performance with respect to accuracy, speed, and recall more negatively than does auditory stress. Defined classical music might lead to minimally worse performance initially, but leads to significantly improved memory consolidation. Furthermore, psychologic testing of the volunteers suggests that surgeons with greater musical commitment, measured by the MEQ, perform worse under the mental loading condition. CONCLUSION Mental distraction and auditory stress negatively affect specific components of surgical learning and performance. If used appropriately, classical music may positively affect surgical memory consolidation. It also may be possible to predict surgeons' performance and learning under stress through psychological tests on the role of music in a surgeon's life. Further investigation is necessary to determine the cognitive processes behind these correlations. PMID:22584632

  6. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise

    PubMed Central

    White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina

    2015-01-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025

  7. Functional MRI of Auditory Responses in the Zebra Finch Forebrain Reveals a Hierarchical Organisation Based on Signal Strength but Not Selectivity

    PubMed Central

    Boumans, Tiny; Gobes, Sharon M. H.; Poirier, Colline; Theunissen, Frederic E.; Vandersmissen, Liesbeth; Pintjens, Wouter; Verhoye, Marleen; Bolhuis, Johan J.; Van der Linden, Annemie

    2008-01-01

    Background Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the ‘song system’ is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. Methods and Findings Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. Conclusions Based on our results we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory processing stream. PMID:18781203

  8. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers.

    PubMed

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  9. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    PubMed Central

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A.; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning. PMID:29255435

  10. Design of Training Systems, Phase II-A Report. An Educational Technology Assessment Model (ETAM)

    DTIC Science & Technology

    1975-07-01

    "format" for the perceptual tasks. This is applicable to auditory as well as visual tasks. Student Participation in Learning Route. When a student enters... skill formats. Skill training 05.05 Vehicle properties. Instructional functions: type of stimulus presented to student (visual, auditory)... Subtask 05.05. For example, a trainer to identify and interpret auditory signals would not be represented in the above list. Trainers in the vehicle

  11. Auditory working memory predicts individual differences in absolute pitch learning.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance, the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical-period adult population, even if the performance typically achieved by this population falls below that of a "true" AP possessor. The current studies attempt to understand individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding the cognitive capacity related to categorization: to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predicted that individuals with higher WMC would be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories in post-critical-period adults. Implications for understanding the mechanisms that underlie the phenomenon of AP are also discussed. Copyright © 2015. Published by Elsevier B.V.

  12. Binding of Verbal and Spatial Features in Auditory Working Memory

    ERIC Educational Resources Information Center

    Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.

    2009-01-01

    The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g., V1S1, V2S2, V3S3, V4S4) were…

  13. Linking prenatal experience to the emerging musical mind.

    PubMed

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  14. Auditory processing disorders and problems with hearing-aid fitting in old age.

    PubMed

    Antonelli, A R

    1978-01-01

    The hearing handicap experienced by elderly subjects depends only partially on end-organ impairment. Not only does neural unit loss along the central auditory pathways contribute to decreased speech discrimination, but learning processes are also slowed down. Diotic listening in elderly people seems to hasten learning of discrimination in critical conditions, as in the case of sensitized speech. This fact, together with the binaural gain through binaural release from masking, stresses the superiority, on theoretical grounds, of binaural over monaural hearing-aid fitting.

  15. Plasticity in the adult human auditory brainstem following short-term linguistic training

    PubMed Central

    Song, Judy H.; Skoe, Erika; Wong, Patrick C. M.; Kraus, Nina

    2009-01-01

    Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech relies on functionally and acoustically complex sounds that place high sensory and cognitive demands on the listener, long-term exposure to and experience with these sounds is often attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited pre-attentively. We measured FFRs to the trained pitch patterns before and after training. Measures of pitch-tracking were then derived from the FFR signals. We found increased accuracy in pitch-tracking after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy only occurred for the most acoustically complex pitch contour (dipping contour), which is also the least familiar to our English-speaking subjects. These results not only demonstrate the contribution of the brainstem to language learning and its plasticity in adulthood, but also demonstrate the specificity of this contribution (i.e., changes in encoding occur only for specific, least-familiar stimuli, not for all stimuli). Our findings complement existing data showing cortical changes after second language learning, and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed. PMID:18370594

  16. Sequencing Stories in Spanish and English.

    ERIC Educational Resources Information Center

    Steckbeck, Pamela Meza

    The guide was designed for speech pathologists, bilingual teachers, and specialists in English as a second language who work with Spanish-speaking children. The guide contains twenty illustrated stories that facilitate the learning of auditory sequencing, auditory and visual memory, receptive and expressive vocabulary, and expressive language…

  17. Processing Problems and Language Impairment in Children.

    ERIC Educational Resources Information Center

    Watkins, Ruth V.

    1990-01-01

    The article reviews studies on the assessment of rapid auditory processing abilities. Issues in auditory processing research are identified including a link between otitis media with effusion and language learning problems. A theory that linguistically impaired children experience difficulty in perceiving and processing low phonetic substance…

  18. Dopaminergic Contributions to Vocal Learning

    PubMed Central

    Hoffmann, Lukas A.; Saravanan, Varun; Wood, Alynda N.; He, Li

    2016-01-01

    Although the brain relies on auditory information to calibrate vocal behavior, the neural substrates of vocal learning remain unclear. Here we demonstrate that lesions of the dopaminergic inputs to a basal ganglia nucleus in a songbird species (Bengalese finches, Lonchura striata var. domestica) greatly reduced the magnitude of vocal learning driven by disruptive auditory feedback in a negative reinforcement task. These lesions produced no measurable effects on the quality of vocal performance or the amount of song produced. Our results suggest that dopaminergic inputs to the basal ganglia selectively mediate reinforcement-driven vocal plasticity. In contrast, dopaminergic lesions produced no measurable effects on the birds' ability to restore song acoustics to baseline following the cessation of reinforcement training, suggesting that different forms of vocal plasticity may use different neural mechanisms. SIGNIFICANCE STATEMENT During skill learning, the brain relies on sensory feedback to improve motor performance. However, the neural basis of sensorimotor learning is poorly understood. Here, we investigate the role of the neurotransmitter dopamine in regulating vocal learning in the Bengalese finch, a songbird with an extremely precise singing behavior that can nevertheless be reshaped dramatically by auditory feedback. Our findings show that reduction of dopamine inputs to a region of the songbird basal ganglia greatly impairs vocal learning but has no detectable effect on vocal performance. These results suggest a specific role for dopamine in regulating vocal plasticity. PMID:26888928

  19. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks, suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Composing alarms: considering the musical aspects of auditory alarm design.

    PubMed

    Gillard, Jessica; Schutz, Michael

    2016-12-01

    Short melodies are commonly linked to referents in jingles, ringtones, movie themes, and even auditory displays (i.e., sounds used in human-computer interactions). While melody associations can be quite effective, auditory alarms in medical devices are generally poorly learned and highly confused. Here, we draw on approaches and stimuli from both music cognition (melody recognition) and human factors (alarm design) to analyze the patterns of confusions in a paired-associate alarm-learning task involving both a standardized melodic alarm set (Experiment 1) and a set of novel melodies (Experiment 2). Although contour played a role in confusions (consistent with previous research), we observed several cases in which melodies with similar contours were rarely confused, namely melodies containing musically distinctive features. This exploratory work suggests that salient features formed by an alarm's melodic structure (such as repeated notes, distinct contours, and easily recognizable intervals) can increase the likelihood of correct alarm identification. We conclude that the use of musical principles and features may help future efforts to improve the design of auditory alarms.

  1. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision-making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision-making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  2. Inhibitory Network Interactions Shape the Auditory Processing of Natural Communication Signals in the Songbird Auditory Forebrain

    PubMed Central

    Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.

    2008-01-01

    The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371

  3. Mapping perception to action in piano practice: a longitudinal DC-EEG study

    PubMed Central

    Bangert, Marc; Altenmüller, Eckart O

    2003-01-01

    Background Performing music requires fast auditory and motor processing. Regarding professional musicians, recent brain imaging studies have demonstrated that auditory stimulation produces a co-activation of motor areas, whereas silent tapping of musical phrases evokes a co-activation in auditory regions. Whether this is obtained via a specific cerebral relay station is unclear. Furthermore, the time course of plasticity has not yet been addressed. Results Changes in cortical activation patterns (DC-EEG potentials) induced by short (20 minute) and long term (5 week) piano learning were investigated during auditory and motoric tasks. Two beginner groups were trained. The 'map' group was allowed to learn the standard piano key-to-pitch map. For the 'no-map' group, random assignment of keys to tones prevented such a map. Auditory-sensorimotor EEG co-activity occurred within only 20 minutes. The effect was enhanced after 5-week training, contributing elements of both perception and action to the mental representation of the instrument. The 'map' group demonstrated significant additional activity of right anterior regions. Conclusion We conclude that musical training triggers instant plasticity in the cortex, and that right-hemispheric anterior areas provide an audio-motor interface for the mental representation of the keyboard. PMID:14575529

  4. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  5. Effects of training and motivation on auditory P300 brain-computer interface performance.

    PubMed

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance can be further optimized. The influence of motivation, mood and workload on performance and P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
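
    The information transfer rates reported above can be related to the number of classes, the selection accuracy, and the selection time through the standard Wolpaw ITR formula. The abstract does not state how the authors computed their figures, so the Python sketch below is only a hedged illustration; the 30-second selection time is a hypothetical value chosen to show that a 25-class speller at 70% accuracy lands in the reported 3.7-5.6 bits/min range.

      import math

      def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
          """Bits conveyed by a single selection under the Wolpaw ITR model."""
          p = accuracy
          if p >= 1.0:
              return math.log2(n_classes)
          if p <= 0.0:
              return 0.0  # chance-or-worse accuracy treated as zero for simplicity
          return (math.log2(n_classes)
                  + p * math.log2(p)
                  + (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1)))

      def itr_bits_per_minute(n_classes, accuracy, seconds_per_selection):
          return wolpaw_bits_per_selection(n_classes, accuracy) * 60.0 / seconds_per_selection

      # 5 x 5 letter matrix -> 25 classes; 70% accuracy; hypothetical 30 s per letter.
      print(round(itr_bits_per_minute(25, 0.70, 30.0), 2))  # ~4.77 bits/min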

  6. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency.

    PubMed

    Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich

    2017-02-01

    The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Memory functions of children born with asymmetric intrauterine growth restriction.

    PubMed

    Geva, Ronny; Eshel, Rina; Leitner, Yael; Fattal-Valevski, Aviva; Harel, Shaul

    2006-10-30

    Learning difficulties are frequently diagnosed in children born with intrauterine growth restriction (IUGR). Models of various animal species with IUGR were studied and demonstrated specific susceptibility and alterations of the hippocampal formation and its related neural structures. The main purpose was to study memory functions of children born with asymmetric IUGR in a large-scale cohort using a long-term prospective paradigm. One hundred and ten infants diagnosed with IUGR were followed up from birth to 9 years of age. Their performance was compared with that of a group of 63 children with comparable gestational age and multiple socioeconomic factors. Memory functions (short-term, super- and long-term spans) for different stimulus types (verbal and visual) were evaluated using the Visual Auditory Digit Span tasks (VADS), the Rey Auditory Verbal Learning Test (Rey-AVLT), and the Rey Osterrieth Complex Figure Test (ROCF). Children with IUGR had short-term memory difficulties that hindered both the serial verbal processing system and the simultaneous processing of high-load visuo-spatial stimuli. The difficulties were not related to prematurity, neonatal complications or growth catch-up, but were augmented by lower maternal education. Recognition skills and benefits from reiteration, typically affected by hippocampal dysfunction, were preserved in both groups. The memory profile of children born with IUGR is characterized primarily by a short-term memory deficit that does not necessarily comply with a typical hippocampal deficit, but rather may reflect an executive short-term memory deficit characteristic of the anterior hippocampal-prefrontal network. Implications for cognitive intervention are discussed.

  8. The Influence of Eye Closure on Somatosensory Discrimination: A Trade-off Between Simple Perception and Discrimination.

    PubMed

    Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W

    2017-06-01

    We often close our eyes to improve perception. Recent results have shown a decrease in perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. Somatosensory P50m was reduced during a task that required distinguishing changes in stimulus location at the distal phalanges of different fingers. The somatosensory P50m was further reduced and detection performance was higher when the eyes were open. A similar reduction was found for the auditory P50m during a task that required distinguishing changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Can an aircraft be piloted via sonification with an acceptable attentional cost? A comparison of blind and sighted pilots.

    PubMed

    Valéry, Benoît; Scannella, Sébastien; Peysakhovich, Vsevolod; Barone, Pascal; Causse, Mickaël

    2017-07-01

    In the aeronautics field, some authors have suggested that an aircraft's attitude sonification could be used by pilots to cope with spatial disorientation situations. Such a system is currently used by blind pilots to control the attitude of their aircraft. However, given the suspected higher auditory attentional capacities of blind people, the possibility for sighted individuals to use this system remains an open question. For example, its introduction may overload the auditory channel, which may in turn alter the responsiveness of pilots to infrequent but critical auditory warnings. In this study, two groups of pilots (blind versus sighted) performed a simulated flight experiment consisting of successive aircraft maneuvers, on the sole basis of an aircraft sonification. Maneuver difficulty was varied while we assessed flight performance along with subjective and electroencephalographic (EEG) measures of workload. The results showed that both groups of participants reached target attitudes with good accuracy. However, more complex maneuvers increased subjective workload and impaired brain responsiveness toward unexpected auditory stimuli, as demonstrated by lower N1 and P3 amplitudes. Although the EEG signal showed a clear reorganization of the brain in the blind participants (higher alpha power), the brain responsiveness to unexpected auditory stimuli was not significantly different between the two groups. The results suggest that an auditory display might provide useful additional information to spatially disoriented pilots with normal vision. However, its use should be restricted to critical situations and simple recovery or guidance maneuvers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  11. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
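
    The searchlight approach described above slides a small sphere across the brain and asks, at each location, how well a classifier trained on the local activity pattern can decode the condition of interest. The Python sketch below shows that core loop with scikit-learn on synthetic data; the voxel grid, sphere radius, and linear SVM are illustrative assumptions, not the authors' actual fMRI pipeline.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def searchlight_accuracy(data, labels, coords, radius=2.0, cv=5):
          """Minimal searchlight: cross-validated decoding accuracy per voxel.

          data   : (n_samples, n_voxels) activity patterns
          labels : (n_samples,) condition labels (e.g., sound location or emotion)
          coords : (n_voxels, 3) voxel coordinates
          """
          scores = np.zeros(coords.shape[0])
          for v in range(coords.shape[0]):
              # Voxels inside the sphere centred on voxel v form one searchlight.
              in_sphere = np.linalg.norm(coords - coords[v], axis=1) <= radius
              patch = data[:, in_sphere]
              scores[v] = cross_val_score(SVC(kernel="linear"), patch, labels, cv=cv).mean()
          return scores

      # Toy example: 40 trials on a 4 x 4 x 4 voxel grid, two hypothetical conditions.
      rng = np.random.default_rng(0)
      coords = np.argwhere(np.ones((4, 4, 4), dtype=bool)).astype(float)
      labels = np.repeat([0, 1], 20)
      data = rng.normal(size=(40, coords.shape[0]))
      data[labels == 1, :5] += 1.0  # inject a weak "informative" cluster
      print(searchlight_accuracy(data, labels, coords).max())

    The map of per-voxel accuracies is what distinguishes this multivariate approach from a univariate contrast: information carried jointly by a local pattern can be detected even when no single voxel differs reliably between conditions.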

  12. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  13. Individual differences in adult foreign language learning: the mediating effect of metalinguistic awareness.

    PubMed

    Brooks, Patricia J; Kempe, Vera

    2013-02-01

    In this study, we sought to identify cognitive predictors of individual differences in adult foreign-language learning and to test whether metalinguistic awareness mediated the observed relationships. Using a miniature language-learning paradigm, adults (N = 77) learned Russian vocabulary and grammar (gender agreement and case marking) over six 1-h sessions, completing tasks that encouraged attention to phrases without explicitly teaching grammatical rules. The participants' ability to describe the Russian gender and case-marking patterns mediated the effects of nonverbal intelligence and auditory sequence learning on grammar learning and generalization. Hence, even under implicit-learning conditions, individual differences stemmed from explicit metalinguistic awareness of the underlying grammar, which, in turn, was linked to nonverbal intelligence and auditory sequence learning. Prior knowledge of languages with grammatical gender (predominantly Spanish) predicted learning of gender agreement. Transfer of knowledge of gender from other languages to Russian was not mediated by awareness, which suggests that transfer operates through an implicit process akin to structural priming.

  14. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  15. Negative Priming in Free Recall Reconsidered

    ERIC Educational Resources Information Center

    Hanczakowski, Maciej; Beaman, C. Philip; Jones, Dylan M.

    2016-01-01

    Negative priming in free recall is the finding of impaired memory performance when previously ignored auditory distracters become targets of encoding and retrieval. This negative priming has been attributed to an aftereffect of deploying inhibitory mechanisms that serve to suppress auditory distraction and minimize interference with learning and…

  16. Infants Learn Phonotactic Regularities from Brief Auditory Experience.

    ERIC Educational Resources Information Center

    Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia

    2003-01-01

    Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-olds from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position. Findings in a subsequent…

  17. Incidental learning of sound categories is impaired in developmental dyslexia.

    PubMed

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    PubMed Central

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  19. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  20. Missing a trick: Auditory load modulates conscious awareness in audition.

    PubMed

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  2. Thinking about touch facilitates tactile but not auditory processing.

    PubMed

    Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris

    2012-05-01

    Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discrimination of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster than auditory stimuli, and vice versa. On average, tactile stimuli were responded to faster than auditory stimuli, and stimuli in the imagery condition were on average responded to more slowly than at baseline (left/right discrimination without an imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.

  3. The Easy-to-Hard Effect in Human (Homo sapiens) and Rat (Rattus norvegicus) Auditory Identification

    PubMed Central

    Liu, Estella H.; Mercado, Eduardo; Church, Barbara A.; Orduña, Itzel

    2009-01-01

    Training exercises can improve perceptual sensitivities. We examined whether progressively training humans and rats to perform a difficult auditory identification task led to larger improvements than extensive training with highly similar sounds (the easy-to-hard effect). Practice improved humans’ ability to distinguish sounds regardless of the training regimen. However, progressively trained participants were more accurate and showed more generalization, despite significantly less training with the stimuli that were the most difficult to distinguish. Rats showed less capacity to improve with practice, but still benefited from progressive training. These findings indicate that transitioning from an easier to a more difficult task during training can facilitate, and in some cases may be essential for, auditory perceptual learning. The results are not predicted by an explanation that assumes interaction of generalized excitation and inhibition, but are consistent with a hierarchical account of perceptual learning in which the representational precision required to distinguish stimuli determines the mechanisms engaged during learning. PMID:18489229

  4. How the songbird brain listens to its own songs

    NASA Astrophysics Data System (ADS)

    Hahnloser, Richard

    2010-03-01

    Songbirds are capable of vocal learning and communication and are ideally suited to the study of neural mechanisms of auditory feedback processing. When a songbird is deafened in the early sensorimotor phase after tutoring, it fails to imitate the song of its tutor and develops a highly aberrant song. It is also known that birds are capable of storing a long-term memory of tutor song and that they need intact auditory feedback to match their own vocalizations to the tutor's song. Based on these behavioral observations, we investigate feedback processing in single auditory forebrain neurons of juvenile zebra finches that are in a late developmental stage of song learning. We implant birds with miniature motorized microdrives that allow us to record the electrical activity of single neurons while birds are freely moving and singing in their cages. Occasionally, we deliver a brief sound through a loudspeaker to perturb the auditory feedback the bird experiences during singing. These acoustic perturbations of auditory feedback reveal complex sensitivity that cannot be predicted from passive playback responses. Some neurons are highly feedback sensitive in that they respond vigorously to song perturbations, but not to unperturbed songs or perturbed playback. These findings suggest that a computational function of forebrain auditory areas may be to detect errors between actual feedback and mirrored feedback deriving from an internal model of the bird's own song or that of its tutor.

  5. Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms.

    PubMed

    Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C

    2013-11-01

    Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies, which suggest that adopting more ethological paradigms utilizing natural communication contexts are scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Learning Style Preferences of Southeast Asian Students.

    ERIC Educational Resources Information Center

    Park, Clara C.

    2000-01-01

    Investigated the perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning of Southeast Asian students compared to white students. Surveys indicated significant differences in learning style preferences between Southeast Asian and white students and between the diverse…

  7. Development of Mobile Accessible Pedestrian Signals (MAPS) for Blind Pedestrians at Signalized Intersections

    DOT National Transportation Integrated Search

    2011-06-01

    People with vision impairment have different perception and spatial cognition as compared to the sighted people. Blind pedestrians primarily rely on auditory, olfactory, or tactile feedback to determine spatial location and find their way. They gener...

  8. Auditory reafferences: the influence of real-time feedback on movement control.

    PubMed

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  9. Effects of musicality and motivational orientation on auditory category learning: a test of a regulatory-fit hypothesis.

    PubMed

    McAuley, J Devin; Henry, Molly J; Wedd, Alan; Pleskac, Timothy J; Cesario, Joseph

    2012-02-01

    Two experiments investigated the effects of musicality and motivational orientation on auditory category learning. In both experiments, participants learned to classify tone stimuli that varied in frequency and duration according to an initially unknown disjunctive rule; feedback involved gaining points for correct responses (a gains reward structure) or losing points for incorrect responses (a losses reward structure). For Experiment 1, participants were told at the start that musicians typically outperform nonmusicians on the task, and then they were asked to identify themselves as either a "musician" or a "nonmusician." For Experiment 2, participants were given either a promotion focus prime (a performance-based opportunity to gain entry into a raffle) or a prevention focus prime (a performance-based criterion that needed to be maintained to avoid losing an entry into a raffle) at the start of the experiment. Consistent with a regulatory-fit hypothesis, self-identified musicians and promotion-primed participants given a gains reward structure made more correct tone classifications and were more likely to discover the optimal disjunctive rule than were musicians and promotion-primed participants experiencing losses. Reward structure (gains vs. losses) had inconsistent effects on the performance of nonmusicians, and a weaker regulatory-fit effect was found for the prevention focus prime. Overall, the findings from this study demonstrate a regulatory-fit effect in the domain of auditory category learning and show that motivational orientation may contribute to musician performance advantages in auditory perception.

  10. An Assistive Computerized Learning Environment for Distance Learning Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Klemes, Joel; Epstein, Alit; Zuker, Michal; Grinberg, Nira; Ilovitch, Tamar

    2006-01-01

    The current study examines how a computerized learning environment assists students with learning disabilities (LD) enrolled in a distance learning course at the Open University of Israel. The technology provides computer display of the text, synchronized with auditory output and accompanied by additional computerized study skill tools which…

  11. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  12. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  13. Peeling the Onion of Auditory Processing Disorder: A Language/Curricular-Based Perspective

    ERIC Educational Resources Information Center

    Wallach, Geraldine P.

    2011-01-01

    Purpose: This article addresses auditory processing disorder (APD) from a language-based perspective. The author asks speech-language pathologists to evaluate the functionality (or not) of APD as a diagnostic category for children and adolescents with language-learning and academic difficulties. Suggestions are offered from a…

  14. Stimulus-Dependent Flexibility in Non-Human Auditory Pitch Processing

    ERIC Educational Resources Information Center

    Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.

    2012-01-01

    Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are…

  15. Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?

    ERIC Educational Resources Information Center

    Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno

    2012-01-01

    The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…

  16. The Goldilocks Effect in Infant Auditory Attention

    ERIC Educational Resources Information Center

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  17. The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition

    PubMed Central

    McLachlan, Neil M.; Wilson, Sarah J.

    2017-01-01

    The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information automatically recognizes incoming auditory signals and predicts likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850

  18. Similarity in Spatial Origin of Information Facilitates Cue Competition and Interference

    ERIC Educational Resources Information Center

    Amundson, Jeffrey C.; Miller, Ralph R.

    2007-01-01

    Two lick suppression studies were conducted with water-deprived rats to investigate the influence of spatial similarity in cue interaction. Experiment 1 assessed the influence of similarity of the spatial origin of competing cues in a blocking procedure. Greater blocking was observed in the condition in which the auditory blocking cue and the…

  19. Foxp2 mutations impair auditory-motor association learning.

    PubMed

    Kurt, Simone; Fisher, Simon E; Ehret, Günter

    2012-01-01

    Heterozygous mutations of the human FOXP2 transcription factor gene cause the best-described examples of monogenic speech and language disorders. Acquisition of proficient spoken language involves auditory-guided vocal learning, a specialized form of sensory-motor association learning. The impact of etiological Foxp2 mutations on learning of auditory-motor associations in mammals has not been determined yet. Here, we directly assess this type of learning using a newly developed conditioned avoidance paradigm in a shuttle-box for mice. We show striking deficits in mice heterozygous for either of two different Foxp2 mutations previously implicated in human speech disorders. Both mutations cause delays in acquiring new motor skills. The magnitude of impairments in association learning, however, depends on the nature of the mutation. Mice with a missense mutation in the DNA-binding domain are able to learn, but at a much slower rate than wild type animals, while mice carrying an early nonsense mutation learn very little. These results are consistent with expression of Foxp2 in distributed circuits of the cortex, striatum and cerebellum that are known to play key roles in acquisition of motor skills and sensory-motor association learning, and suggest differing in vivo effects for distinct variants of the Foxp2 protein. Given the importance of such networks for the acquisition of human spoken language, and the fact that similar mutations in human FOXP2 cause problems with speech development, this work opens up a new perspective on the use of mouse models for understanding pathways underlying speech and language disorders.

  20. Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.

    PubMed

    Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis

    2018-07-15

    The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, was reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
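
    Transfer entropy, used above to characterize directed information flow among sources, quantifies how much the recent past of one signal reduces uncertainty about the next sample of another signal beyond what that signal's own past already explains. The Python sketch below is a minimal plug-in (histogram) estimator with a history length of one sample; the binning, embedding, and toy signals are simplifying assumptions for illustration, not the authors' MEG estimation pipeline.

      import numpy as np

      def transfer_entropy(x, y, n_bins=4):
          """Plug-in estimate of TE(X -> Y) in bits, with a history length of one sample.

          TE(X -> Y) = sum over (y_next, y_now, x_now) of
              p(y_next, y_now, x_now) * log2[ p(y_next | y_now, x_now) / p(y_next | y_now) ]
          """
          def discretize(s):
              cuts = np.quantile(s, np.linspace(0, 1, n_bins + 1)[1:-1])
              return np.searchsorted(cuts, s)

          xd, yd = discretize(np.asarray(x)), discretize(np.asarray(y))
          y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

          # Joint histogram -> plug-in probabilities p(y_next, y_now, x_now).
          joint = np.zeros((n_bins, n_bins, n_bins))
          for a, b, c in zip(y_next, y_now, x_now):
              joint[a, b, c] += 1.0
          joint /= joint.sum()

          p_yy = joint.sum(axis=2)       # p(y_next, y_now)
          p_yx = joint.sum(axis=0)       # p(y_now, x_now)
          p_y = joint.sum(axis=(0, 2))   # p(y_now)

          te = 0.0
          for a in range(n_bins):
              for b in range(n_bins):
                  for c in range(n_bins):
                      if joint[a, b, c] > 0:
                          te += joint[a, b, c] * np.log2(
                              joint[a, b, c] * p_y[b] / (p_yy[a, b] * p_yx[b, c]))
          return te

      # Toy check: y follows x with a one-sample lag, so TE(x -> y) should exceed TE(y -> x).
      rng = np.random.default_rng(1)
      x = rng.normal(size=5000)
      y = 0.8 * np.roll(x, 1) + 0.2 * rng.normal(size=5000)
      print(transfer_entropy(x, y), transfer_entropy(y, x))

    The asymmetry between the two estimates in the toy check reflects the one-sample lag built into the synthetic data; it is this directional asymmetry that makes transfer entropy useful for inferring information processing flow between cortical sources.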

  1. Beethoven's Last Piano Sonata and Those Who Follow Crocodiles: Cross-Domain Mappings of Auditory Pitch in a Musical Context

    ERIC Educational Resources Information Center

    Eitan, Zohar; Timmers, Renee

    2010-01-01

    Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total number of 35 pitch mappings and investigated in four experiments how these mappings are used and…

  2. Near-Independent Capacities and Highly Constrained Output Orders in the Simultaneous Free Recall of Auditory-Verbal and Visuo-Spatial Stimuli

    ERIC Educational Resources Information Center

    Cortis Mack, Cathleen; Dent, Kevin; Ward, Geoff

    2018-01-01

    Three experiments examined the immediate free recall (IFR) of auditory-verbal and visuospatial materials from single-modality and dual-modality lists. In Experiment 1, we presented participants with between 1 and 16 spoken words, with between 1 and 16 visuospatial dot locations, or with between 1 and 16 words "and" dots with synchronized…

  3. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    PubMed

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.

  4. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training.

    PubMed

    Bell, Brittany A; Phan, Mimi L; Vicario, David S

    2015-03-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.

  5. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    PubMed

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
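
    The Meijer-style image-to-sound conversion referred to above maps the vertical axis of an image to tone frequency, the horizontal axis to time (a left-to-right scan), and pixel brightness to amplitude. The Python sketch below illustrates that general mapping only; the frequency range, scan duration, and sample rate are assumed values, not the published algorithm's exact parameters or the soundscapes used in this study.

      import numpy as np

      def image_to_soundscape(image, duration=1.0, fs=44100, f_lo=500.0, f_hi=5000.0):
          """Meijer-style sketch: columns scanned left-to-right over time,
          rows mapped to tone frequencies (top = high), brightness to amplitude.

          image: 2D array with values in [0, 1], row 0 at the top.
          """
          n_rows, n_cols = image.shape
          samples_per_col = int(duration * fs / n_cols)
          # Exponentially spaced frequencies, highest frequency for the top row.
          freqs = np.geomspace(f_hi, f_lo, n_rows)
          t = np.arange(samples_per_col) / fs
          soundscape = []
          for col in range(n_cols):
              tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])   # (rows, samples)
              segment = (image[:, col][:, None] * tones).sum(axis=0)
              soundscape.append(segment)
          out = np.concatenate(soundscape)
          return out / (np.abs(out).max() + 1e-12)  # normalize to [-1, 1]

      # Toy "shape": a bright diagonal line on an 8 x 8 grid becomes a falling sweep.
      img = np.eye(8)
      audio = image_to_soundscape(img)
      print(audio.shape)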

  6. Inter-individual differences in how presentation modality affects verbal learning performance in children aged 5 to 16.

    PubMed

    Meijs, Celeste; Hurks, Petra P M; Wassenberg, Renske; Feron, Frans J M; Jolles, Jelle

    2016-01-01

    This study examines inter-individual differences in how presentation modality affects verbal learning performance. Children aged 5 to 16 performed a verbal learning test within one of three presentation modalities: pictorial, auditory, or textual. The results indicated that pictures confer a benefit over auditory and textual presentation modalities and that this benefit increases with age. However, this effect is only found if the information to be learned is presented once (or at most twice) and only in children above the age of 7. The results may be explained in terms of single or dual coding of information in which the phonological loop is involved. Development of the (sub)vocal rehearsal system in the phonological loop is believed to be a gradual process that begins around the age of 7. The developmental trajectories are similar for boys and girls. Additionally, auditory and textual information both seemed to be processed in a similar manner, namely without labeling or recoding, leading to single coding. In contrast, pictures are assumed to be processed by the dual coding of both the visual information and a (verbal) labeling of the pictures.

  7. A pilot study of working memory and academic achievement in college students with ADHD.

    PubMed

    Gropper, Rachel J; Tannock, Rosemary

    2009-05-01

    To investigate working memory (WM), academic achievement, and their relationship in university students with attention-deficit/hyperactivity disorder (ADHD). Participants were university students with previously confirmed diagnoses of ADHD (n = 16) and normal control (NC) students (n = 30). Participants completed 3 auditory-verbal WM measures, 2 visual-spatial WM measures, and 1 control executive function task. They also self-reported grade point averages (GPAs) based on their university courses. The ADHD group displayed significant weaknesses on auditory-verbal WM tasks and 1 visual-spatial task. They also showed a nonsignificant trend for lower GPAs. Within the entire sample, there was a significant relationship between GPA and auditory-verbal WM. WM impairments are evident in a subgroup of the ADHD population attending university. WM abilities are linked with academic attainment, and WM impairments may therefore compromise it. Parents and physicians are advised to counsel university-bound students with ADHD to contact the university accessibility services to provide them with academic guidance.

  8. Does Gender Influence Learning Style Preferences of First-Year Medical Students?

    ERIC Educational Resources Information Center

    Slater, Jill A.; Lujan, Heidi L.; DiCarlo, Stephen E.

    2007-01-01

    Students have specific learning style preferences, and these preferences may be different between male and female students. Understanding a student's learning style preference is an important consideration when designing classroom instruction. Therefore, we administered the visual, auditory, reading/writing, kinesthetic (VARK) learning preferences…

  9. Implicit Statistical Learning and Language Skills in Bilingual Children

    ERIC Educational Resources Information Center

    Yim, Dongsun; Rudoy, John

    2013-01-01

    Purpose: Implicit statistical learning in 2 nonlinguistic domains (visual and auditory) was used to investigate (a) whether linguistic experience influences the underlying learning mechanism and (b) whether there are modality constraints in predicting implicit statistical learning with age and language skills. Method: Implicit statistical learning…

  10. Otitis Media: Effect on a Child's Learning.

    ERIC Educational Resources Information Center

    Gdowski, Becky S.; And Others

    1986-01-01

    The paper reviews the relationship between otitis media, auditory processing, language, and learning development. Suggestions are provided for identifying and managing students with suspected histories of the condition. (CL)

  11. Use of rhythm in acquisition of a computer-generated tracking task.

    PubMed

    Fulop, A C; Kirby, R H; Coates, G D

    1992-08-01

    This research assessed whether rhythm aids acquisition of motor skills by providing cues for the timing of those skills. Rhythms were presented to participants visually or visually with auditory cues. It was hypothesized that the auditory cues would facilitate recognition and learning of the rhythms. The three timing principles of rhythms were also explored. It was hypothesized that rhythms that satisfied all three timing principles would be more beneficial in learning a skill than rhythms that did not satisfy the principles. Three groups learned three different rhythms by practicing a tracking task. After training, participants attempted to reproduce the tracks from memory. Results suggest that rhythms do help in learning motor skills but different sets of timing principles explain perception of rhythm in different modalities.

  12. Sensorimotor nucleus NIf is necessary for auditory processing but not vocal motor output in the avian song system.

    PubMed

    Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F

    2005-04-01

    Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.

  13. Effect of Long-Term Sodium Salicylate Administration on Learning, Memory, and Neurogenesis in the Rat Hippocampus

    PubMed Central

    Niu, Haichen; Ding, Sheng; Li, Haiying; Wei, Jianfeng; Ren, Chao; Wu, Xiujuan

    2018-01-01

    Tinnitus is thought to be caused by damage to the auditory and nonauditory systems due to exposure to loud noise, aging, or other etiologies. However, at present, the exact neurophysiological basis of chronic tinnitus remains unknown. To explore whether the function of the limbic system is disturbed in tinnitus, the hippocampus, which plays a vital role in learning and memory, was selected. Hippocampal function was examined with a learning and memory procedure. For this purpose, sodium salicylate (NaSal) was used to create a rat model of tinnitus, evaluated with prepulse inhibition (PPI) behavior. The acquisition and retrieval abilities of spatial memory were measured using the Morris water maze (MWM) in NaSal-treated and control animals, followed by observation of c-Fos and delta-FosB protein expression in the hippocampal field by immunohistochemistry. To further identify the neural substrate for memory change in tinnitus, neurogenesis in the subgranular zone of the dentate gyrus (DG) was compared between the NaSal group and the control group. The results showed that acquisition and retrieval of spatial memory were impaired by NaSal treatment. The expression of c-Fos and delta-FosB protein was also inhibited in NaSal-treated animals. Simultaneously, neurogenesis in the DG was also impaired in tinnitus animals. In general, our data suggest that the hippocampal system (limbic system) may play a key role in tinnitus pathology.

  14. Task relevance modulates the behavioural and neural effects of sensory predictions

    PubMed Central

    Friston, Karl J.; Nobre, Anna C.

    2017-01-01

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225
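
    The model-based analysis described above derives trial-wise predictions and prediction errors from behaviour. A minimal sketch of that general idea is given below, assuming a simple beta-binomial ideal observer that tracks the probability of a validly cued target; this observer, the ideal_observer helper, and the 80% cue validity are illustrative assumptions, not the authors' actual model.

        # Sketch: trial-wise predictions and prediction errors (PEs) from an
        # assumed beta-binomial ideal observer tracking cue validity.
        import numpy as np

        def ideal_observer(outcomes, a0=1.0, b0=1.0):
            """outcomes: 1 if the cued (predicted) target occurred on a trial, else 0."""
            a, b = a0, b0
            predictions, prediction_errors = [], []
            for o in outcomes:
                p = a / (a + b)                     # prediction before this trial
                predictions.append(p)
                prediction_errors.append(o - p)     # signed PE for this trial
                a += o                              # Bayesian update of the Beta belief
                b += 1 - o
            return np.array(predictions), np.array(prediction_errors)

        rng = np.random.default_rng(0)
        outcomes = rng.binomial(1, 0.8, size=200)   # cue valid on ~80% of trials
        pred, pe = ideal_observer(outcomes)
        # `pred` and `pe` could then serve as trial-wise regressors for band-limited power.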

  15. Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI.

    PubMed

    Sengupta, Ayan; Pollmann, Stefan; Hanke, Michael

    2018-01-01

    Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation -- primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature, and are not primarily composed of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal, or whether it is specific to the details of employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex that matches a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
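
    One common way to realise the combination of spatial band-pass filtering and multivariate decoding described above is a difference-of-Gaussians filter followed by a linear classifier. The sketch below uses synthetic stand-in data; the FWHM values, the LinearSVC classifier, and the bandpass_volumes helper are illustrative assumptions, not the published analysis pipeline.

        # Sketch: spatial band-pass filtering (difference of Gaussians) of per-trial
        # ROI volumes, followed by cross-validated linear decoding of stimulus class.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        FWHM_TO_SIGMA = 1.0 / 2.3548    # convert FWHM (in voxels) to Gaussian sigma

        def bandpass_volumes(volumes, fwhm_low, fwhm_high):
            """Band-pass each 3D volume by subtracting a coarse blur from a fine blur."""
            out = []
            for vol in volumes:
                fine = gaussian_filter(vol, fwhm_low * FWHM_TO_SIGMA)
                coarse = gaussian_filter(vol, fwhm_high * FWHM_TO_SIGMA)
                out.append(fine - coarse)
            return np.stack(out)

        # Synthetic stand-in data: 80 "trials" of a 10x10x10 ROI, 4 stimulus classes.
        rng = np.random.default_rng(1)
        volumes = rng.standard_normal((80, 10, 10, 10))
        labels = np.repeat(np.arange(4), 20)

        filtered = bandpass_volumes(volumes, fwhm_low=2.0, fwhm_high=6.0)
        X = filtered.reshape(len(filtered), -1)     # voxels as features
        scores = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5)
        print("mean decoding accuracy:", scores.mean())

    Repeating the decoding step across a range of filter bands is what allows the contribution of different spatial scales to be compared.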

  16. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might nevertheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and only a slight bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  17. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered immediately afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  18. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflecting both peripheral and central auditory aging. In addition, compared with young adults, middle-aged and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
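
    The interaural time difference (ITD) cue isolated by the low-pass targets can be illustrated by estimating the delay between the two ears of a binaural signal with a simple cross-correlation. The sampling rate, tone frequency, simulated 400-microsecond delay, and the estimate_itd helper below are illustrative assumptions, not the study's stimuli or analysis.

        # Sketch: estimating the ITD (low-frequency azimuth cue) by cross-correlating
        # the left and right channels of a binaural signal.
        import numpy as np

        def estimate_itd(left, right, fs, max_itd=1e-3):
            """Return the delay (s) of `right` relative to `left` (positive = right lags)."""
            max_lag = int(max_itd * fs)                  # physiological ITDs are < ~1 ms
            lags = np.arange(-max_lag, max_lag + 1)
            # np.roll wraps around, an acceptable approximation for a steady tone.
            corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
            return lags[int(np.argmax(corr))] / fs

        fs = 48000
        t = np.arange(int(0.2 * fs)) / fs
        true_itd = 400e-6                                # 400 microseconds, right ear lags
        sig = np.sin(2 * np.pi * 500 * t)                # 500 Hz tone (low-pass region)
        left = sig
        right = np.roll(sig, int(true_itd * fs))
        print("estimated ITD (s):", estimate_itd(left, right, fs))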

  19. White-matter structural connectivity predicts short-term melody and rhythm learning in non-musicians.

    PubMed

    Vaquero, Lucía; Ramos-Escobar, Neus; François, Clément; Penhune, Virginia; Rodríguez-Fornells, Antoni

    2018-06-18

    Music learning has received increasing attention in the last decades due to the variety of functions and brain plasticity effects involved during its practice. Most previous reports interpreted the differences between music experts and laymen as the result of training. However, recent investigations suggest that these differences are due to a combination of genetic predispositions and the effects of music training. Here, we tested the relationship of the dorsal auditory-motor pathway with individual behavioural differences in short-term music learning. We gathered structural neuroimaging data from 44 healthy non-musicians (28 females) before they performed a rhythm- and a melody-learning task during a single behavioural session, and manually dissected the arcuate fasciculus (AF) in both hemispheres. The macro- and microstructural organization of the AF (i.e., volume and fractional anisotropy, FA) predicted the learning rate and learning speed in the musical tasks, but only in the right hemisphere. Specifically, the volume of the right anterior segment predicted the synchronization improvement during the rhythm task, the FA in the right long segment was correlated with the learning rate in the melody task, and the volume and FA of the right whole AF predicted the learning speed during the melody task. This is the first study to find a specific relation between different branches within the AF and rhythmic and melodic materials. Our results support the relevant function of the AF as the structural correlate of both auditory-motor transformations and the feedback-feedforward loop, and suggest a crucial involvement of the anterior segment in error-monitoring processes related to auditory-motor learning. These findings have implications both for the neuroscience of music and for second-language learning research. Copyright © 2018. Published by Elsevier Inc.

  20. Effectiveness of Personalised Learning Paths on Students Learning Experiences in an e-Learning Environment

    ERIC Educational Resources Information Center

    Santally, Mohammad Issack; Senteni, Alain

    2013-01-01

    Personalisation of e-learning environments is an interesting research area in which the learning experience of learners is generally believed to be improved when their personal learning preferences are taken into account. One such learning preference is the V-A-K instrument that classifies learners as visual, auditory or kinaesthetic. In this…
