Sample records for spatial auditory display

  1. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    PubMed

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

    A driving simulator study evaluated the effects of five in-vehicle warning information displays on drivers' emergency response and decision performance. The displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid visual-auditory displays with and without spatial compatibility. Thirty volunteer drivers performed tasks involving driving, stimulus-response (S-R), divided attention, and stress rating. Results show that among the single-modality displays, drivers benefited more from the visual display of warning information than from the auditory displays, with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance on the divided attention task and the accuracy of their S-R decisions. Drivers performed best with the hybrid display with spatial compatibility: hybrid displays enabled drivers to respond fastest and most accurately in both the S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear to different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.

  3. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  4. Technical aspects of a demonstration tape for three-dimensional sound displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    This document was developed to accompany an audio cassette that demonstrates work in three-dimensional auditory displays, developed at the Ames Research Center Aerospace Human Factors Division. It provides a text version of the audio material, and covers the theoretical and technical issues of spatial auditory displays in greater depth than on the cassette. The technical procedures used in the production of the audio demonstration are documented, including the methods for simulating rotorcraft radio communication, synthesizing auditory icons, and using the Convolvotron, a real-time spatialization device.

  5. Spatialized audio improves call sign recognition during multi-aircraft control.

    PubMed

    Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J

    2018-07-01

    We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system--as opposed to monophonic, binaural presentation--significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.

  6. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important cues used to fuse or segregate sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the frequency difference between the two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make two alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  7. A virtual display system for conveying three-dimensional acoustic information

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.

    1988-01-01

    The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.

  8. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
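
    Several of these records report intelligibility thresholds measured with an adaptive staircase. As a rough illustration only (the papers above do not specify their exact up/down rule, step size, or stopping criterion, and this simulated listener is hypothetical), a simple 1-up/1-down staircase might look like:

```python
import random

def adaptive_staircase(true_threshold_db, start_db=10.0, step_db=2.0,
                       n_reversals=8, seed=0):
    """Illustrative 1-up/1-down adaptive staircase.

    Lowers the signal level after each correct response and raises it
    after each incorrect one, so the track converges on the level at
    which the (simulated) listener is correct about 50% of the time.
    The threshold estimate is the mean of the levels at which the
    track reversed direction.
    """
    rng = random.Random(seed)
    level = start_db
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated trial: correct when the signal level exceeds the
        # listener's threshold, perturbed by trial-to-trial noise.
        correct = level + rng.gauss(0.0, 1.0) > true_threshold_db
        direction = -1 if correct else +1  # correct -> make it harder
        if last_direction is not None and direction != last_direction:
            reversals.append(level)  # a reversal: track changed direction
        last_direction = direction
        level += direction * step_db
    return sum(reversals) / len(reversals)
```

    Real procedures typically use a transformed rule (e.g., 2-down/1-up) to target a point above 50% correct; the 1-up/1-down version above is just the simplest case.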

  9. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  10. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  11. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  12. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  13. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  14. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.

  15. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  16. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  17. Spatial Disorientation Training - Demonstration and Avoidance (entrainement a la desorientation spatiale - Demonstration et reponse)

    DTIC Science & Technology

    2008-10-01

    Excerpts from RTO-TR-HFM-118, Section 1.2.1.2 ('Acoustic HUD'): A three-dimensional auditory display presents sound from arbitrary directions spanning a ... through the visual channel, use of the auditory channel shortens reaction times and is expected to reduce pilot workload, thus improving the overall ... of vestibular stimulation using a rotating chair or suitable disorientation device to provide each student with a personal experience of some of ...

  18. Concurrent 3-D sonifications enable the head-up monitoring of two interrelated aircraft navigation instruments.

    PubMed

    Towers, John; Burgess-Limerick, Robin; Riek, Stephan

    2014-12-01

    The aim of this study was to enable the head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications. Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research. A flight simulator's head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Both readouts were separately represented by a colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value's deviation from the norm. Each sound pair's position in the listening space indicated the left/right parameter of its instrument's readout. Participants' accuracy in navigating a predetermined flight plan was evaluated while performing a head-up task involving the detection of visual flares in the out-of-cockpit scene. The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful. Conclusion: A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display. Pilots operating aircraft, such as helicopters and unmanned aerial vehicles, may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft, visual tasks.

  19. Converging Modalities Ground Abstract Categories: The Case of Politics

    PubMed Central

    Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360

  20. Converging modalities ground abstract categories: the case of politics.

    PubMed

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  1. Virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as potentiators of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using head-related transfer functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems that make use of transforms derived from normative manikins and simulations of room acoustics. Such an interface also requires careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners using nonindividualized HRTFs, localization performance was only slightly degraded compared to a subject's inherent ability. Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTFs as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring the use of individually tailored HRTFs.
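
    The HRTF-based synthesis these records describe amounts to convolving a mono input with a pair of head-related impulse responses (the time-domain form of the HRTFs). A minimal sketch of that rendering step, using toy impulse responses rather than measured HRTFs (the 5-sample delay and 0.5 gain below are illustrative values, not from any of the papers):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal to 2-channel binaural audio by convolving
    it with left- and right-ear head-related impulse responses."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Zero-pad to a common length so the channels can be stacked.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])

# Toy HRIRs standing in for measured ones: the right ear receives the
# sound slightly later and quieter, crudely mimicking a source off to
# the listener's left.
fs = 8000
t = np.arange(fs // 10) / fs
mono = np.sin(2 * np.pi * 440 * t)             # 100 ms, 440 Hz tone
hrir_l = np.array([1.0])                        # direct path, full level
hrir_r = np.concatenate([np.zeros(5), [0.5]])   # ~0.6 ms delay, -6 dB
binaural = spatialize(mono, hrir_l, hrir_r)
```

    A real-time system like the one described above would perform the same filtering with measured, minimum-phase-approximated HRTFs on DSP hardware, updating the filter pair as the virtual source position changes.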

  2. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

    A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird ( Molothrus ater ) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  3. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.

  4. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  5. Intelligibility of speech in a virtual 3-D environment.

    PubMed

    MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J

    2002-01-01

    In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.
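
    The primacy of the interaural (left/right) axis found above reflects the binaural cues that axis carries. None of these records gives a formula, but a standard first-order model of one such cue, the interaural time difference (ITD), is Woodworth's spherical-head approximation, sketched here with a typical head radius:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (in seconds) for a distant source:

        ITD = (r / c) * (theta + sin(theta))

    where r is the head radius, c the speed of sound, and theta the
    source azimuth measured from straight ahead.
    """
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))
```

    For a source at 90 degrees azimuth this gives roughly 0.66 ms, the largest ITD the head geometry allows, which is consistent with left/right placement producing the strongest spatial separation.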

  6. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than those in retinally driven superficial SC layers and earlier than those in deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  8. Pilot Errors Involving Head-Up Displays (HUDs), Helmet-Mounted Displays (HMDs), and Night Vision Goggles (NVGs)

    DTIC Science & Technology

    1992-01-01

results in stimulation of spatial-motion-location visual processes, which are known to take precedence over any other sensory or cognitive stimuli. In...or version he is flying. This was initially an observation that stimulated the birth of the human-factors engineering discipline during World War II...collisions with the surface, the pilot needs inputs to sensory channels other than the focal visual system. Properly designed auditory and

  9. A pilot study of working memory and academic achievement in college students with ADHD.

    PubMed

    Gropper, Rachel J; Tannock, Rosemary

    2009-05-01

    To investigate working memory (WM), academic achievement, and their relationship in university students with attention-deficit/hyperactivity disorder (ADHD). Participants were university students with previously confirmed diagnoses of ADHD (n = 16) and normal control (NC) students (n = 30). Participants completed 3 auditory-verbal WM measures, 2 visual-spatial WM measures, and 1 control executive function task. Also, they self-reported grade point averages (GPAs) based on university courses. The ADHD group displayed significant weaknesses on auditory-verbal WM tasks and 1 visual-spatial task. They also showed a nonsignificant trend for lower GPAs. Within the entire sample, there was a significant relationship between GPA and auditory-verbal WM. WM impairments are evident in a subgroup of the ADHD population attending university. WM abilities are linked with, and thus may compromise, academic attainment. Parents and physicians are advised to counsel university-bound students with ADHD to contact the university accessibility services to provide them with academic guidance.

  10. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  11. An Introduction to 3-D Sound

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

This talk will overview the basic technologies related to the creation of virtual acoustic images, and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.
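The core signal-processing step behind a virtual acoustic display of this kind is convolution of a mono source with a left/right HRTF pair. The sketch below is a minimal illustration in NumPy; it substitutes a crude synthetic two-tap impulse response (interaural time and level differences only) for the measured HRTF data that a real display would use, and all constants are illustrative assumptions.

```python
import numpy as np

FS = 44100  # sample rate (Hz)

def synth_hrir(azimuth_deg, fs=FS):
    """Crude synthetic head-related impulse responses (left, right).

    Approximates only the interaural time and level differences for a
    source at the given azimuth; a real display uses measured HRTFs.
    """
    az = np.radians(azimuth_deg)
    itd = 0.00066 * np.sin(az)        # ~660 us maximum interaural delay
    ild = 1.0 + 0.3 * np.sin(az)      # simple near-ear level boost
    n = 128
    left, right = np.zeros(n), np.zeros(n)
    delay = int(round(abs(itd) * fs))
    if itd >= 0:                      # source to the right: left ear delayed
        right[0] = ild
        left[delay] = 2.0 - ild
    else:                             # source to the left: right ear delayed
        left[0] = 2.0 - ild
        right[delay] = ild
    return left, right

def spatialize(mono, azimuth_deg):
    """Convolve a mono signal with the HRIR pair -> stereo (N, 2) array."""
    l, r = synth_hrir(azimuth_deg)
    return np.stack([np.convolve(mono, l), np.convolve(mono, r)], axis=1)

tone = np.sin(2 * np.pi * 1000 * np.arange(FS // 10) / FS)
stereo = spatialize(tone, 60)         # virtual source 60 degrees to the right
```

Played over headphones, the delayed, attenuated left channel shifts the perceived source rightward; measured HRTFs add the spectral (pinna) cues this toy version omits.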

  12. Human Engineer’s Guide to Auditory Displays. Volume 2. Elements of Signal Reception and Resolution Affecting Auditory Displays.

    DTIC Science & Technology

    1984-08-01

This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to acoustic signals, pertaining to the major areas of auditory processing in humans.

  13. Human engineer's guide to auditory displays. Volume 1. Elements of perception and memory affecting auditory displays

    NASA Astrophysics Data System (ADS)

    Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.

    1984-08-01

    This work reviews the areas of auditory attention, recognition, memory and auditory perception of patterns, pitch, and loudness. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays.

  14. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    PubMed

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
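The two feedback types named here can be sketched briefly: an earcon is a fixed, message-like motif, while parameter-mapping sonification maps a continuous value (here, a hypothetical gaze-dwell progress) onto a sound parameter. The frequencies and the one-octave mapping below are illustrative assumptions, not the design from the study.

```python
import numpy as np

FS = 22050  # sample rate (Hz)

def earcon(fs=FS):
    """A fixed two-note rising motif, used as a discrete confirmation cue."""
    t = np.arange(int(0.08 * fs)) / fs
    return np.concatenate([np.sin(2 * np.pi * 660 * t),
                           np.sin(2 * np.pi * 880 * t)])

def progress_sonification(progress, fs=FS, dur=0.1):
    """Parameter-mapping sonification: map a 0..1 progress value to pitch.

    Pitch rises one octave (440 -> 880 Hz) as progress goes from 0 to 1,
    giving continuous feedback while a selection dwell completes.
    """
    f = 440.0 * 2.0 ** float(np.clip(progress, 0.0, 1.0))
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * f * t)

confirm = earcon()                    # played once when a selection commits
tick = progress_sonification(0.5)     # played repeatedly as dwell progresses
```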

  15. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. 
Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  16. Auditory, visual, and bimodal data link displays and how they support pilot performance.

    PubMed

    Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S

    2013-06-01

The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to instrument panel or mounted on console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 s per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and by Lancaster and Casali, the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. Data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.

  17. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  18. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task was slightly but significantly better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of that using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
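The frequency encoding described above (log frequency, six octaves, 1 kHz center) can be written down directly. A minimal sketch, assuming the vertical error is normalized to [-1, 1] with 0 meaning on-target; the normalization itself is an assumption, not stated in the abstract:

```python
CENTER_HZ = 1000.0   # display center, marked in the study by an amplitude notch
OCTAVE_SPAN = 6      # total range: three octaves above and below center

def error_to_frequency(error):
    """Map a normalized vertical error in [-1, 1] to a display tone frequency.

    error =  0  -> 1000 Hz (on target)
    error =  1  -> 8000 Hz (three octaves up)
    error = -1  ->  125 Hz (three octaves down)
    Out-of-range errors are clipped to the display limits.
    """
    error = max(-1.0, min(1.0, error))
    return CENTER_HZ * 2.0 ** (error * OCTAVE_SPAN / 2.0)
```

Because the mapping is exponential in the error, equal error increments produce equal pitch-interval steps, which matches how listeners judge pitch.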

  19. Can an aircraft be piloted via sonification with an acceptable attentional cost? A comparison of blind and sighted pilots.

    PubMed

    Valéry, Benoît; Scannella, Sébastien; Peysakhovich, Vsevolod; Barone, Pascal; Causse, Mickaël

    2017-07-01

In the aeronautics field, some authors have suggested that an aircraft's attitude sonification could be used by pilots to cope with spatial disorientation situations. Such a system is currently used by blind pilots to control the attitude of their aircraft. However, given the suspected higher auditory attentional capacities of blind people, the possibility for sighted individuals to use this system remains an open question. For example, its introduction may overload the auditory channel, which may in turn alter the responsiveness of pilots to infrequent but critical auditory warnings. In this study, two groups of pilots (blind versus sighted) performed a simulated flight experiment consisting of successive aircraft maneuvers, on the sole basis of an aircraft sonification. Maneuver difficulty was varied while we assessed flight performance along with subjective and electroencephalographic (EEG) measures of workload. The results showed that both groups of participants reached target attitudes with good accuracy. However, more complex maneuvers increased subjective workload and impaired brain responsiveness toward unexpected auditory stimuli, as demonstrated by lower N1 and P3 amplitudes. Although the EEG signal showed a clear reorganization of the brain in the blind participants (higher alpha power), brain responsiveness to unexpected auditory stimuli was not significantly different between the two groups. The results suggest that an auditory display might provide useful additional information to spatially disoriented pilots with normal vision. However, its use should be restricted to critical situations and simple recovery or guidance maneuvers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
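The stimulus idea behind the illusion, a rattling sound whose amplitude-modulation rate scales with speed, can be sketched as a noise carrier with a sinusoidal envelope. The scaling constant below is arbitrary and illustrative, not taken from the study:

```python
import numpy as np

FS = 44100  # sample rate (Hz)

def rattle(speed, dur=1.0, am_per_speed=8.0, fs=FS, seed=0):
    """Broadband noise carrier amplitude-modulated at a rate proportional
    to a simulated speed.

    am_per_speed (Hz of modulation per unit speed) is an assumed,
    illustrative scaling: higher speed -> higher AM frequency, which the
    study found is perceived as faster motion.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    carrier = rng.standard_normal(t.size)            # "rattle" carrier
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_per_speed * speed * t))
    return envelope * carrier

slow = rattle(1.0)   # 8 Hz modulation: sounds like a slow slide
fast = rattle(4.0)   # 32 Hz modulation: same carrier, perceived as faster
```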

  1. Utility of an airframe referenced spatial auditory display for general aviation operations

    NASA Astrophysics Data System (ADS)

    Naqvi, M. Hassan; Wigdahl, Alan J.; Ranaudo, Richard J.

    2009-05-01

The University of Tennessee Space Institute (UTSI) completed flight testing with an airframe-referenced localized audio cueing display. The purpose was to assess its effect on pilot performance, workload, and situational awareness in two scenarios simulating single-pilot general aviation operations under instrument meteorological conditions. Each scenario consisted of 12 test procedures conducted under simulated instrument meteorological conditions, half with the cue off and half with the cue on. Simulated aircraft malfunctions were strategically inserted at critical times during each test procedure. Ten pilots participated in the study; half flew a moderate-workload scenario consisting of point-to-point navigation and holding pattern operations, and half flew a high-workload scenario consisting of non-precision approaches and missed approach procedures. Flight data consisted of aircraft and navigation state parameters, NASA Task Load Index (TLX) assessments, and post-flight questionnaires. With localized cues, pilots showed slightly better technical performance, reduced workload, and a perceived improvement in situational awareness. Results indicate that an airframe-referenced auditory display has utility and pilot acceptance in general aviation operations.

  2. The relative impact of generic head-related transfer functions on auditory speech thresholds: implications for the design of three-dimensional audio displays.

    PubMed

    Arrabito, G R; McFadden, S M; Crabtree, R B

    2001-07-01

Auditory speech thresholds were measured in this study. Subjects were required to discriminate a female voice recording of three-digit numbers in the presence of diotic speech babble. The voice stimulus was spatialized at 11 static azimuth positions on the horizontal plane using three different head-related transfer functions (HRTFs) measured on individuals who did not participate in this study. The diotic presentation of the voice stimulus served as the control condition. The results showed that two of the HRTFs performed similarly and had significantly lower auditory speech thresholds than the third HRTF. All three HRTFs yielded significantly lower auditory speech thresholds compared with the diotic presentation of the voice stimulus, with the largest difference at 60 degrees azimuth. The practical implications of these results suggest that lower headphone levels of the communication system in military aircraft can be achieved without sacrificing intelligibility, thereby lessening the risk of hearing loss.

  3. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study.
Results showed that males produced highly directional calls that, together with social status, influenced the response of receivers. Results from the playback study confirmed that the isolated acoustic components of this display elicited similar responses among males. These three chapters provide further information about comparative aspects of spatial auditory processing in pinnipeds.

  4. Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.

    PubMed

    Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas

    2017-01-01

In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.

  5. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  6. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    PubMed

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  7. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
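
    The hemifield code this review describes can be illustrated with a minimal sketch: two broadly tuned channels prefer opposite hemifields, and source laterality is read out from their relative activity rather than from a topographic map. The tuning widths and preferred directions below are illustrative assumptions, not values from the review:

```python
import math

def hemifield_rates(azimuth_deg, sigma=60.0):
    """Rates of two broadly tuned opponent populations, one preferring
    the far left (-90 deg) and one the far right (+90 deg)."""
    left = math.exp(-((azimuth_deg + 90.0) ** 2) / (2 * sigma ** 2))
    right = math.exp(-((azimuth_deg - 90.0) ** 2) / (2 * sigma ** 2))
    return left, right

def lateral_readout(left, right):
    """Opponent-channel readout: the normalised rate difference grows
    monotonically from left (-1) to right (+1) across the frontal field,
    so two coarse channels suffice to represent azimuth."""
    return (right - left) / (right + left)

for az in (-90, -45, 0, 45, 90):
    l, r = hemifield_rates(az)
    print(az, round(lateral_readout(l, r), 3))
```

    With Gaussian channels this readout reduces to a smooth sigmoid of azimuth, which is why location can be recovered from a population rate difference even though no single neuron is sharply tuned.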

  8. Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.

    PubMed

    de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie

    2017-09-01

    Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, though only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices presenting visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation. A dedicated turn-by-turn guidance device was particularly favoured. Visual maps are particularly useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available on all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.

  9. Short-term visual deprivation can enhance spatial release from masking.

    PubMed

    Pagé, Sara; Sharp, Andréanne; Landry, Simon P; Champoux, François

    2016-08-15

    This research aims to study the effect of short-term visual deprivation on spatial release from masking, a major component of the cocktail party effect that allows people to detect an auditory target in noise. The Masking Level Difference (MLD) test was administered to healthy individuals over three sessions: before (I) and after 90 min of visual deprivation (II), and after 90 min of re-exposure to light (III). A non-deprived control group performed the same tests, but remained sighted between sessions I and II. The non-deprived control group displayed constant results across sessions. However, performance in the MLD test was improved following short-term visual deprivation, and performance returned to pre-deprivation values after light re-exposure. This study finds that short-term visual deprivation transiently enhances spatial release from masking. These data suggest the significant potential for enhancing a process involved in the cocktail party effect in normally developing individuals and add to an emerging literature on the potential to enhance auditory ability after only a brief period of visual deprivation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  11. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  12. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  13. Human Consequences of Agile Aircraft (Facteurs humains lies au pilotage des avions de combat tres manoeuvrants)

    DTIC Science & Technology

    2001-05-01

    displays were discussed with the test pilots that we interviewed. The pilots had mixed opinions on tactile and auditory displays. Positive comments...were noted concerning three-dimensional auditory displays, although some stated that the pilot could easily ignore the aural tone. Others complained...recognition technology was not reliable enough and worried about problems with surrounding auditory signals from anti-G straining maneuvers, oxygen

  14. Sonification Prototype for Space Physics

    NASA Astrophysics Data System (ADS)

    Candey, R. M.; Schertenleib, A. M.; Diaz Merced, W. L.

    2005-12-01

    As an alternative and adjunct to visual displays, auditory exploration of data via sonification (data-controlled sound) and audification (audible playback of data samples) is promising for complex or rapidly/temporally changing visualizations, for data exploration of large datasets (particularly multi-dimensional datasets), and for exploring datasets in frequency rather than spatial dimensions (see also International Conferences on Auditory Display). Besides improving data exploration and analysis for most researchers, the use of sound is especially valuable as an assistive technology for visually-impaired people and can make science and math more exciting for high school and college students. Only recently have the hardware and software come together to make a cross-platform open-source sonification tool feasible. We have developed a prototype sonification data analysis tool using the JavaSound API and NASA GSFC's ViSBARD software. Wanda Diaz Merced, a blind astrophysicist from Puerto Rico, is instrumental in advising on and testing the tool.
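
    The core technique behind such tools, parameter-mapping sonification, can be sketched in minimal form. The sample rate, pitch range, and linear mapping below are illustrative choices, not the prototype's actual design:

```python
import math

def sonify(values, fs=8000, note_dur=0.1, f_lo=220.0, f_hi=880.0):
    """Parameter-mapping sonification: map each data value linearly to a
    pitch between f_lo and f_hi and render it as a short sine tone, so a
    rising data series is heard as a rising melody."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero for flat data
    n = int(fs * note_dur)
    samples = []
    for v in values:
        freq = f_lo + (v - vmin) / span * (f_hi - f_lo)
        samples.extend(math.sin(2 * math.pi * freq * t / fs) for t in range(n))
    return samples

audio = sonify([1.0, 3.0, 2.0, 5.0, 4.0])
print(len(audio))  # 5 notes x 800 samples each = 4000
```

    In a real tool the samples would be written to an audio device or file (e.g., via the JavaSound API mentioned above); here the mapping itself is the point.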

  15. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  16. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    ERIC Educational Resources Information Center

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  17. Headphone and Head-Mounted Visual Displays for Virtual Environments

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.

  18. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries: search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  19. Advanced Multimodal Solutions for Information Presentation

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Godfroy-Cooper, Martine

    2018-01-01

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. 
As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long-duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.

  20. Virtual Acoustics, Aeronautics and Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An optimal approach to auditory display design for commercial aircraft would utilize both spatialized ("3-D") audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss amongst pilots are also considered.

  1. Virtual acoustics, aeronautics, and communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Wenzel, E. M. (Principal Investigator)

    1998-01-01

    An optimal approach to auditory display design for commercial aircraft would utilize both spatialized (3-D) audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss among pilots are also considered.

  2. Human engineer's guide to auditory displays. Volume 2: Elements of signal reception and resolution affecting auditory displays

    NASA Astrophysics Data System (ADS)

    Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.

    1984-08-01

    This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to acoustic signals. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a database to be utilized in the design and evaluation of acoustic displays. Appendix 1 also contains citations of the scientific literature on which the answers to each question were based. There are nineteen questions and answers, and more than two hundred citations contained in the list of references given in Appendix 2. This is one of two related works, the other of which reviewed the literature in the areas of auditory attention, recognition memory, and auditory perception of patterns, pitch, and loudness.

  3. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Feasibility of and Design Parameters for a Computer-Based Attitudinal Research Information System

    DTIC Science & Technology

    1975-08-01

    Auditory Displays Auditory Evoked Potentials Auditory Feedback Auditory Hallucinations Auditory Localization Auditory Masking Auditory Neurons...surprising to hear these problems expressed once again and in the same old refrain. The Navy attitude surveyors were frustrated when they...Audiology Audiometers Audiometry Audiotapes Audiovisual Communications Media Audiovisual Instruction Auditory Cortex Auditory

  5. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  6. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
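
    The kind of Bayesian inference model invoked here can be sketched in minimal form. The Gaussian likelihood width, the uniform-workspace approximation, and the prior values below are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def p_common_source(disparity_deg, sigma=10.0, prior=0.5):
    """Posterior probability that the auditory and visual cues came from
    one source. The likelihood of the observed disparity under a common
    cause is Gaussian; under independent causes it is approximated as
    uniform over a +/-90 deg workspace."""
    like_common = (math.exp(-disparity_deg ** 2 / (2 * sigma ** 2))
                   / (sigma * math.sqrt(2 * math.pi)))
    like_indep = 1.0 / 180.0
    num = prior * like_common
    return num / (num + (1.0 - prior) * like_indep)

# Capture is likely at small disparities and unlikely at large ones.
for d in (0, 5, 15, 45):
    print(d, round(p_common_source(d), 3))
```

    Raising `prior` widens the range of disparities over which a common source (and hence capture) is inferred, which is the mechanism the model uses to explain the task-dependent difference between localization and congruence judgment.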

  7. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  9. Spatial Disorientation in Military Vehicles: Causes, Consequences and Cures (Desorientation spatiale dans les vehicules militaires: causes, consequences et remedes)

    DTIC Science & Technology

    2003-02-01

    service warfighters (Training devices and protocols, Onboard equipment, Cognitive and sensorimotor aids, Visual and auditory symbology, Peripheral visual...vestibular stimulation causing a decrease in cerebral blood pressure with the consequent reduction in G-tolerance and increased likelihood of ALOC or GLOC...tactile stimulators (e.g. one providing a sensation of movement) or of displays with a more complex coding (e.g. by increase in the number of tactors, or

  10. Extending and Applying the EPIC Architecture for Human Cognition and Performance: Auditory and Spatial Components

    DTIC Science & Technology

    2016-03-01

    manual rather than verbal responses. The coordinate response measure (CRM) task and speech corpus is a highly simplified form of the command and...in multi-talker speech experiments. The CRM corpus is a collection of recorded command utterances in the form of Ready <Callsign> go to <Color...In the two-talker CRM listening task, participants respond to commands by pointing to the appropriate Color/Digit pair on a computer display. A

  11. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

    Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual

  12. Multivariable manual control with simultaneous visual and auditory presentation of information. [for improved compensatory tracking performance of human operator

    NASA Technical Reports Server (NTRS)

    Uhlemann, H.; Geiser, G.

    1975-01-01

    Multivariable manual compensatory tracking experiments were carried out to determine typical strategies of the human operator and the conditions under which performance improves when one of the visual tracking-error displays is supplemented by auditory feedback. Because the tracking error of the exclusively visually displayed system was found to decrease, but in general not that of the auditorily supported system, it was concluded that auditory feedback unloads the operator's visual system, allowing concentration on the remaining exclusively visual displays.

  13. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    PubMed

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  14. Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl’s Auditory Forebrain

    PubMed Central

    2017-01-01

    Abstract While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl's auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
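    The signal and noise correlations analyzed above can be illustrated with a short sketch; all spike counts below are simulated and the tuning curves are hypothetical, not barn-owl data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts for two nearby neurons:
# 8 azimuth positions x 50 trials each.
n_pos, n_trials = 8, 50
tuning_a = np.sin(np.linspace(0, np.pi, n_pos)) * 10 + 5   # tuning curve, neuron A
tuning_b = tuning_a + rng.normal(0, 1, n_pos)              # similarly tuned neuron B
shared = rng.normal(0, 1, (n_pos, n_trials))               # shared trial-by-trial noise
resp_a = tuning_a[:, None] + shared + rng.normal(0, 1, (n_pos, n_trials))
resp_b = tuning_b[:, None] + shared + rng.normal(0, 1, (n_pos, n_trials))

# Signal correlation: correlation of the trial-averaged tuning curves.
sig_corr = np.corrcoef(resp_a.mean(axis=1), resp_b.mean(axis=1))[0, 1]

# Noise correlation: correlation of per-trial residuals after removing
# each neuron's mean response at that position.
res_a = (resp_a - resp_a.mean(axis=1, keepdims=True)).ravel()
res_b = (resp_b - resp_b.mean(axis=1, keepdims=True)).ravel()
noise_corr = np.corrcoef(res_a, res_b)[0, 1]
```

    Signal correlation compares the averaged tuning curves; noise correlation compares the residuals around them, which is where shared input shows up.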

  15. Modality-specificity of Selective Attention Networks.

    PubMed

    Stewart, Hannah J; Amitay, Sygal

    2015-01-01

    To establish the modality specificity and generality of selective attention networks. Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a-priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.

  16. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L eliminated completely auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  17. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
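    The sub-additivity criterion used above (AV - V < A) can be made concrete with illustrative numbers; the amplitudes below are invented for the sketch, not data from the study:

```python
# Simulated N1 peak amplitudes in microvolts (more negative = larger N1)
# for auditory-only (A), visual-only (V), and audiovisual (AV) conditions.
A, V, AV = -6.0, -1.5, -6.5

# Additive-model test: subtract the visual-only response from the
# audiovisual response and compare the remainder to the auditory-only one.
av_minus_v = AV - V                     # -5.0
suppression = av_minus_v - A            # +1.0 microvolt of N1 suppression
subadditive = abs(av_minus_v) < abs(A)  # True: AV - V is smaller than A
```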

  19. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.

  20. Modality-specificity of Selective Attention Networks

    PubMed Central

    Stewart, Hannah J.; Amitay, Sygal

    2015-01-01

    Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised of a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled as “spatial orienting” and “spatial conflict,” respectively—they were comprised of orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task—all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a-priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with the auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709

  1. The many facets of auditory display

    NASA Technical Reports Server (NTRS)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences includes music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years, allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  2. A neural network model of ventriloquism effect and aftereffect.

    PubMed

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
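    The study's model is a two-layer network, but the "higher spatial reliability" explanation it builds on is commonly summarized by reliability-weighted cue fusion. A minimal sketch of that weighting (the numbers are illustrative, not the network itself):

```python
def fused_estimate(x_vis, sigma_vis, x_aud, sigma_aud):
    """Maximum-likelihood cue fusion: each cue is weighted by its
    inverse variance, i.e., by its spatial reliability."""
    w_vis = 1.0 / sigma_vis**2
    w_aud = 1.0 / sigma_aud**2
    return (w_vis * x_vis + w_aud * x_aud) / (w_vis + w_aud)

# Visual cue at 0 deg (precise), auditory cue at 10 deg (diffuse):
perceived = fused_estimate(x_vis=0.0, sigma_vis=1.0, x_aud=10.0, sigma_aud=5.0)
```

    Because the visual variance is 25 times smaller here, the fused estimate lands near the visual location, reproducing the direction of the ventriloquism bias.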

  3. High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.

    PubMed

    Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland

    2006-10-01

    The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performances in agoraphobic patients and normal subjects comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real-time according to their movement in the virtual town. While general performances remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The result of this study points to the multisensory integration deficit of agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.

  4. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L. (Principal Investigator)

    1995-01-01

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.
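    The synthesis step such displays rely on, filtering a source signal through the listener's HRTFs, can be sketched as a pair of convolutions. The impulse responses below are random placeholders, not measured HRTFs:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44100

# Hypothetical head-related impulse responses (HRIRs) for one source
# position; real ones come from per-listener measurements like those
# described above.
hrir_left = rng.normal(0, 0.1, 256)
hrir_right = np.roll(hrir_left, 30) * 0.7   # crude interaural delay + level cut

mono = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)  # 1 s, 500 Hz tone

# Binaural rendering: convolve the mono source with each ear's HRIR,
# then play the two channels over headphones.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right])
```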

  5. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
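    The two binaural cues in question, ITD and ILD, can be estimated directly from left- and right-ear signals. A sketch on a simulated source (the delay and attenuation values are invented):

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 300 * t)

# Simulate a source off to the right: the left ear receives the sound
# later (ITD) and attenuated (ILD).
delay_samples = 20                     # ~0.45 ms interaural time difference
right = sig
left = np.concatenate([np.zeros(delay_samples), sig[:-delay_samples]]) * 0.5

# ITD estimate: lag of the peak of the interaural cross-correlation.
lags = np.arange(-len(sig) + 1, len(sig))
xcorr = np.correlate(left, right, mode="full")
itd_samples = lags[np.argmax(xcorr)]

# ILD estimate: interaural level difference in dB.
ild_db = 10 * np.log10(np.sum(right**2) / np.sum(left**2))
```

    The cross-correlation peak recovers the imposed 20-sample delay, and the energy ratio recovers roughly the 6 dB attenuation.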

  6. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357

  7. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    PubMed

    Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E

    2011-04-29

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  8. Audition and exposure to toluene - a contribution to the theme

    PubMed Central

    Augusto, Lívia Sanches Calvi; Kulay, Luiz Alexandre; Franco, Eloisa Sartori

    2012-01-01

    Summary Introduction: With technological advances and changes in production processes, workers are exposed to different physical and chemical agents in the work environment. Toluene is an organic solvent present in glues, paints, and oils, among other products. Objective: To review literature findings indicating that workers exposed simultaneously to this solvent and to noise have a greater probability of developing hearing loss of peripheral origin. Method: Review of the literature on occupational hearing loss in workers exposed to noise and toluene. Results: Isolated exposure to toluene can also trigger a shift in auditory thresholds. These audiometric findings of toluene ototoxicity resemble those produced by noise exposure, which makes it difficult to distinguish the audiometric result of combined exposure (noise and toluene) from exposure to noise alone. Conclusion: Most of the studies were designed to generate hypotheses and should be considered preliminary steps toward further research. To date, workplace agents and their effects have been studied in isolation, and tolerance limits do not take combined exposures into account. Considering that workers are exposed to multiple agents and that hearing loss is irreversible, the tests implemented should be more complete, and all workers should take part in a hearing-conservation program, even those exposed to doses below the recommended exposure limit. PMID:25991943

  9. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from-that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. 
Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors.
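    The multivoxel decoding logic described above can be sketched with simulated patterns; this uses leave-one-out nearest-centroid classification as a simple stand-in for the study's linear support vector machine:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated multivoxel patterns: 40 "trials" x 30 "voxels" per condition.
# Condition B differs from A by a small additive pattern (the decodable case).
n_trials, n_vox = 40, 30
pattern = rng.normal(0, 1, n_vox)
cond_a = rng.normal(0, 1, (n_trials, n_vox))
cond_b = rng.normal(0, 1, (n_trials, n_vox)) + 0.8 * pattern

X = np.vstack([cond_a, cond_b])
y = np.array([0] * n_trials + [1] * n_trials)

# Leave-one-out decoding: classify each held-out trial by its nearest
# class centroid computed from the remaining trials.
correct = 0
for i in range(len(y)):
    train = np.ones(len(y), dtype=bool)
    train[i] = False
    c0 = X[train & (y == 0)].mean(axis=0)
    c1 = X[train & (y == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]
accuracy = correct / len(y)
```

    Above-chance accuracy indicates that the condition label is recoverable from the spatial pattern of activity, the same inference the classifier analysis draws.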

  10. Binaural speech processing in individuals with auditory neuropathy.

    PubMed

    Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B

    2012-12-13

    Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. 
In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Psychophysical evaluation of three-dimensional auditory displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.

    1991-01-01

    Work during this reporting period included the completion of our research on the use of principal components analysis (PCA) to model the acoustical head-related transfer functions (HRTFs) that are used to synthesize virtual sources for three-dimensional auditory displays. In addition, a series of studies was initiated on the perceptual errors made by listeners when localizing free-field and virtual sources. Previous research has revealed that under certain conditions these perceptual errors, often called 'confusions' or 'reversals', are both large and frequent, thus seriously compromising the utility of a 3-D virtual auditory display. The long-range goal of our work in this area is to elucidate the sources of the confusions and to develop signal-processing strategies to reduce or eliminate them.
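    The PCA modeling idea can be sketched on simulated spectra; the "HRTFs" below are synthetic mixtures of three spectral shapes, not measured transfer functions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical HRTF magnitude spectra: 100 source directions x 64 frequency
# bins, built from 3 underlying spectral shapes plus measurement noise.
n_dir, n_freq, n_true = 100, 64, 3
basis = rng.normal(0, 1, (n_true, n_freq))
weights = rng.normal(0, 1, (n_dir, n_true))
hrtfs = weights @ basis + rng.normal(0, 0.05, (n_dir, n_freq))

# PCA via SVD of the mean-centered spectra.
centered = hrtfs - hrtfs.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# A few principal components capture most of the variance, so each HRTF
# can be summarized by a small weight vector -- the modeling idea above.
top3 = var_explained[:3].sum()
```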

  12. Educational Testing of an Auditory Display of Mars Gamma Ray Spectrometer Data

    NASA Astrophysics Data System (ADS)

    Keller, J. M.; Pompea, S. M.; Prather, E. E.; Slater, T. F.; Boynton, W. V.; Enos, H. L.; Quinn, M.

    2003-12-01

    A unique, alternative educational and public outreach product was created to investigate the use and effectiveness of auditory displays in science education. The product, which allows students to both visualize and hear seasonal variations in data detected by the Gamma Ray Spectrometer (GRS) aboard the Mars Odyssey spacecraft, consists of an animation of false-color maps of hydrogen concentrations on Mars along with a musical presentation, or sonification, of the same data. Learners can access this data using the visual false-color animation, the auditory false-pitch sonification, or both. Central to the development of this product is the question of its educational effectiveness and implementation. During the spring 2003 semester, three sections of an introductory astronomy course, each with ~100 non-science undergraduates, were presented with one of three different exposures to GRS hydrogen data: one auditory, one visual, and one both auditory and visual. Student achievement data were collected through use of multiple-choice and open-ended surveys administered before, immediately following, and three and six weeks following the experiment. It was found that the three student groups performed equally well in their ability to perceive and interpret the data presented. Additionally, student groups exposed to the auditory display reported a higher interest and engagement level than the student group exposed to the visual data alone. Based upon this preliminary testing, we have made improvements to both the educational product and our evaluation protocol. This fall, we will conduct further testing with ~100 additional students, half receiving auditory data and half receiving visual data, and we will conduct interviews with individual students as they interface with the auditory display.
Through this process, we hope to further assess both learning and engagement gains associated with alternative and multi-modal representations of scientific data that extend beyond traditional visualization approaches. This work has been supported by the GRS Education and Public Outreach Program and the NASA Spacegrant Graduate Fellowship Program.

  13. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796

  14. Plasticity of spatial hearing: behavioural effects of cortical inactivation

    PubMed Central

    Nodal, Fernando R; Bajo, Victoria M; King, Andrew J

    2012-01-01

    The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635

  15. Auditory peripersonal space in humans.

    PubMed

    Farnè, Alessandro; Làdavas, Elisabetta

    2002-10-01

    In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.

  16. Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks

    DTIC Science & Technology

    2012-05-01

    Technologies 1.1.1. Virtual Whiteboard Cognitive theories relating the utilization, storage, and retrieval of verbal and spatial information, such as...AE Spatial emergent SE Auditory linguistic AL Spatial positional SP Facial figural FF Spatial quantitative SQ Facial motive FM Tactile figural...driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales. 0

  17. An fMRI Study of the Neural Systems Involved in Visually Cued Auditory Top-Down Spatial and Temporal Attention

    PubMed Central

    Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong

    2012-01-01

    Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between attention directed to auditory spatial locations and attention directed to time intervals are unclear. Using functional magnetic resonance imaging (fMRI), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field, responded to spatial orienting of attention, but activity was absent in the bilateral frontal eye field (FEF) during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800

  18. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
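
    The IACC discussed above is conventionally taken as the peak of the normalized cross correlation between the two channel (or ear) signals over short interaural lags of roughly ±1 ms. A minimal sketch under that convention, omitting the band-limiting filter such studies apply before correlating:

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak normalized cross correlation between two equal-length
    channel signals, searched over lags of +/- max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    full = np.correlate(left, right, mode="full") / norm
    mid = len(right) - 1                     # index of the zero-lag term
    return np.max(full[mid - max_lag : mid + max_lag + 1])
```

    Identical channels give an IACC of 1, while independent noise in the two channels gives a value near 0, the decorrelated condition the studies above compare against.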

  19. Tracking the voluntary control of auditory spatial attention with event-related brain potentials.

    PubMed

    Störmer, Viola S; Green, Jessica J; McDonald, John J

    2009-03-01

    A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.

  20. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  1. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), emerges in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown.
Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  2. Acquisition of L2 Japanese Geminates: Training with Waveform Displays

    ERIC Educational Resources Information Center

    Motohashi-Saigo, Miki; Hardison, Debra M.

    2009-01-01

    The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two…

  3. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    NASA Astrophysics Data System (ADS)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  4. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  5. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  6. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

    Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain-damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  7. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Abstract Behaviorally, it is well established that human observers integrate signals near-optimally, weighting each in proportion to its reliability, as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
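
    The MLE model invoked above predicts that each cue is weighted by its relative reliability (inverse variance) and that the fused estimate is never less reliable than either cue alone. A minimal sketch of that prediction (function name and values are illustrative, not taken from the study):

```python
def mle_fuse(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (inverse-variance) fusion of an auditory
    and a visual location cue, as predicted by MLE."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v              # reliabilities
    w_a = r_a / (r_a + r_v)                          # normalized weights
    w_v = r_v / (r_a + r_v)
    mu_fused = w_a * mu_a + w_v * mu_v               # weighted location
    var_fused = 1.0 / (r_a + r_v)                    # <= min(var_a, var_v)
    return mu_fused, var_fused

# Example: the visual cue is 4x more reliable, so it dominates:
# mu = 0.2*10 + 0.8*0 = 2.0, var = 1/(0.25 + 1.0) = 0.8
mu, var = mle_fuse(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
```

    The study's finding is that the weights measured behaviorally and decoded from intraparietal sulcus track these MLE predictions, but shift with modality-specific attention and report.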

  8. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    PubMed

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  9. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

    Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  10. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  11. On the spatial specificity of audiovisual crossmodal exogenous cuing effects.

    PubMed

    Lee, Jae; Spence, Charles

    2017-06-01

    It is generally accepted that the presentation of an auditory cue will direct an observer's spatial attention to the region of space from where it originates and therefore facilitate responses to visual targets presented there rather than from a different position within the cued hemifield. However, to date, there has been surprisingly limited evidence published in support of such within-hemifield crossmodal exogenous spatial cuing effects. Here, we report two experiments designed to investigate within- and between-hemifield spatial cuing effects in the case of audiovisual exogenous covert orienting. Auditory cues were presented from one of four frontal loudspeakers (two on either side of central fixation). There were eight possible visual target locations (one above and another below each of the loudspeakers). The auditory cues were evenly separated laterally by 30° in Experiment 1, and by 10° in Experiment 2. The potential cue and target locations were separated vertically by approximately 19° in Experiment 1, and by 4° in Experiment 2. On each trial, the participants made a speeded elevation (i.e., up vs. down) discrimination response to the visual target following the presentation of a spatially nonpredictive auditory cue. Within-hemifield spatial cuing effects were observed only when the auditory cues were presented from the inner locations. Between-hemifield spatial cuing effects were observed in both experiments. Taken together, these results demonstrate that crossmodal exogenous shifts of spatial attention depend on the eccentricity of both the cue and target in a way that has not been made explicit by previous research. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Before then, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. 
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study made initial footsteps guiding the design of virtual auditory environments that support spatial configuration recall.
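    The dissertation does not specify which attenuation models were compared, but the idea of contrasting a standard distance rolloff with a more "drastic" one can be sketched as follows; the exponent used for the drastic model is purely illustrative, not a value from the study:

    ```python
    import math

    def inverse_distance_gain(d, d_ref=1.0):
        """Standard 1/d attenuation relative to a reference distance."""
        return d_ref / max(d, d_ref)

    def drastic_gain(d, d_ref=1.0, exponent=2.0):
        """Hypothetical steeper rolloff: gain falls off with d**exponent."""
        return (d_ref / max(d, d_ref)) ** exponent

    def gain_db(gain):
        """Convert a linear gain to decibels."""
        return 20.0 * math.log10(gain)

    # At 4 m the standard model attenuates by about 12 dB, the drastic one by
    # about 24 dB, making near sources stand out more clearly from far ones.
    print(round(gain_db(inverse_distance_gain(4.0)), 1))  # -12.0
    print(round(gain_db(drastic_gain(4.0)), 1))           # -24.1
    ```

    Exaggerating distance cues in this way trades physical realism for discriminability, which is consistent with the finding that drastic attenuation aided environment recall.
    
    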

  13. P50 Suppression in Children with Selective Mutism: A Preliminary Report

    ERIC Educational Resources Information Center

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…

  14. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  15. Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.

    PubMed

    Golob, Edward J; Winston, Jenna; Mock, Jeffrey R

    2017-01-01

    Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1) or minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory.

  17. Visually induced plasticity of auditory spatial perception in macaques.

    PubMed

    Woods, Timothy M; Recanzone, Gregg H

    2004-09-07

    When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.

  18. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
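    The left-minus-right contrast used to derive the N2ac can be illustrated with toy data; the sampling rate, epoch length, and waveform shapes below are assumptions for illustration, not values taken from the study:

    ```python
    import numpy as np

    fs = 500.0                              # assumed sampling rate (Hz)
    t = np.arange(600) / fs * 1000.0        # epoch time axis in ms

    # Toy ERPs at one anterior electrode: a negativity around 500 ms appears
    # only for left targets, mimicking the reported N2ac latency.
    erp_left = -2.0 * np.exp(-((t - 500.0) ** 2) / (2 * 40.0 ** 2))
    erp_right = np.zeros_like(t)

    # N2ac-style contrast: targets-left minus targets-right difference waveform.
    n2ac = erp_left - erp_right
    peak_ms = t[np.argmin(n2ac)]            # latency of the negative peak
    print(peak_ms)  # 500.0
    ```

    In a real analysis the two averages would come from contralateral electrode pairs across many trials, but the subtraction step is exactly this simple.
    
    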

  19. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  20. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
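    A minimal sketch of the variance-decomposition idea (not the authors' exact method): quantify how much of a unit's response variance is explained by each stimulus dimension by comparing marginal means over a pitch x timbre x azimuth grid. The grid sizes and synthetic firing rates below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy spike counts for one unit over a pitch x timbre x azimuth grid.
    P, T, A = 4, 4, 8
    r = rng.poisson(5.0, size=(P, T, A)).astype(float)
    r += 2.0 * np.arange(A)        # build in real sensitivity to azimuth

    grand = r.mean()
    total_var = ((r - grand) ** 2).mean()

    def main_effect_share(axis_keep):
        """Variance of the marginal mean along one factor / total variance."""
        axes = tuple(i for i in range(3) if i != axis_keep)
        marginal = r.mean(axis=axes)
        return ((marginal - grand) ** 2).mean() / total_var

    shares = {name: main_effect_share(i)
              for i, name in enumerate(["pitch", "timbre", "azimuth"])}
    # Azimuth should dominate, given how the toy data were constructed.
    ```

    Sensitivity to two or more attributes, as reported for most units, would show up here as two or more non-negligible shares; residual variance beyond the main effects captures the non-linear interactions the authors describe.
    
    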

  1. [Identification of auditory laterality by means of a new dichotic digit test in Spanish, and body laterality and spatial orientation in children with dyslexia and in controls].

    PubMed

    Olivares-García, M R; Peñaloza-López, Y R; García-Pedroza, F; Jesús-Pérez, S; Uribe-Escamilla, R; Jiménez-de la Sancha, S

    In this study, a new dichotic digit test in Spanish (NDDTS) was applied in order to identify auditory laterality. We also evaluated body laterality and spatial orientation using the Subirana test. Both the dichotic test and the Subirana test were applied in a group of 40 children with dyslexia and in a control group of 40 children paired according to age and gender. The results of the three evaluations were analysed using the SPSS 10 software application, with Pearson's chi-squared test. Mixed auditory laterality was found in 42.5% of the children in the dyslexic group, compared to 7.5% in the control group (p ≤ 0.05). Body laterality was mixed in 25% of dyslexic children versus 2.5% of controls (p ≤ 0.05), and spatial disorientation was present in 72.5% of the dyslexic group versus only 15% of the control group (p ≤ 0.05). The NDDTS proved to be a useful tool for demonstrating that mixed auditory laterality and auditory predominance of the left ear are linked to dyslexia; this association was stronger than that observed for body laterality. Spatial orientation is indeed altered in children with dyslexia. The importance of this finding makes it necessary to study the central auditory processes in all cases in order to define better rehabilitation strategies in Spanish-speaking children.
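    The reported group comparison can be reconstructed from the percentages (mixed auditory laterality in 17/40 dyslexic vs. 3/40 control children); a minimal Pearson chi-squared computation for the 2x2 table, without continuity correction, looks like this:

    ```python
    def chi2_2x2(a, b, c, d):
        """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]],
        computed without Yates' continuity correction."""
        n = a + b + c + d
        return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

    # Rows: dyslexic (17 mixed / 23 not mixed) and control (3 / 37).
    stat = chi2_2x2(17, 23, 3, 37)

    # Compare against the 3.841 critical value for 1 df at p = 0.05.
    print(stat > 3.841)  # True
    ```

    The statistic comes out near 13, comfortably past the 0.05 threshold, consistent with the significant group difference the study reports.
    
    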

  2. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. A Systematic Desensitization Paradigm to Treat Hypersensitivity to Auditory Stimuli in Children with Autism in Family Contexts

    ERIC Educational Resources Information Center

    Koegel, Robert L.; Openden, Daniel; Koegel, Lynn Kern

    2004-01-01

    Many children with autism display reactions to auditory stimuli that seem as if the stimuli were painful or otherwise extremely aversive. This article describes, within the contexts of three experimental designs, how procedures of systematic desensitization can be used to treat hypersensitivity to auditory stimuli in three young children with…

  4. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled both to music professionals and to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers preclude low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. Classes of audio functionality in VE include sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages and message interplay that we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  5. "Where do auditory hallucinations come from?"--a brain morphometry study of schizophrenia patients with inner or outer space hallucinations.

    PubMed

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.

  6. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object

    PubMed Central

    Smith, Nicholas A.; Folland, Nicholas A.; Martinez, Diana M.; Trainor, Laurel J.

    2017-01-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869

  7. A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities

    PubMed Central

    Dubus, Gaël; Bresin, Roberto

    2013-01-01

    The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed. PMID:24358192
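    A data-to-pitch mapping of the kind the review found most prevalent can be sketched as a geometric (log-frequency) interpolation, so that equal data steps correspond to equal pitch intervals; the frequency range below is an arbitrary choice for illustration, not one prescribed by the review:

    ```python
    def pitch_map(value, vmin, vmax, f_lo=220.0, f_hi=880.0):
        """Map a physical value onto frequency, linear in pitch
        (i.e. linear in log-frequency) between f_lo and f_hi."""
        x = (value - vmin) / (vmax - vmin)   # normalize to [0, 1]
        return f_lo * (f_hi / f_lo) ** x     # geometric interpolation

    # The midpoint of the data range lands exactly one octave above f_lo.
    print(round(pitch_map(50.0, 0.0, 100.0), 1))  # 440.0
    ```

    Mapping linearly in log-frequency rather than in raw hertz matches how pitch is perceived, which is presumably part of why pitch mappings dominate the reviewed designs.
    
    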

  8. Sustained Perceptual Deficits from Transient Sensory Deprivation

    PubMed Central

    Sanes, Dan H.

    2015-01-01

    Sensory pathways display heightened plasticity during development, yet the perceptual consequences of early experience are generally assessed in adulthood. This approach does not allow one to identify transient perceptual changes that may be linked to the central plasticity observed in juvenile animals. Here, we determined whether a brief period of bilateral auditory deprivation affects sound perception in developing and adult gerbils. Animals were reared with bilateral earplugs, either from postnatal day 11 (P11) to postnatal day 23 (P23) (a manipulation previously found to disrupt gerbil cortical properties), or from P23-P35. Fifteen days after earplug removal and restoration of normal thresholds, animals were tested on their ability to detect the presence of amplitude modulation (AM), a temporal cue that supports vocal communication. Animals reared with earplugs from P11-P23 displayed elevated AM detection thresholds, compared with age-matched controls. In contrast, an identical period of earplug rearing at a later age (P23-P35) did not impair auditory perception. Although the AM thresholds of earplug-reared juveniles improved during a week of repeated testing, a subset of juveniles continued to display a perceptual deficit. Furthermore, although the perceptual deficits induced by transient earplug rearing had resolved for most animals by adulthood, a subset of adults displayed impaired performance. Control experiments indicated that earplugging did not disrupt the integrity of the auditory periphery. Together, our results suggest that P11-P23 encompasses a critical period during which sensory deprivation disrupts central mechanisms that support auditory perceptual skills. SIGNIFICANCE STATEMENT Sensory systems are particularly malleable during development. This heightened degree of plasticity is beneficial because it enables the acquisition of complex skills, such as music or language. However, this plasticity comes with a cost: nervous system development displays an increased vulnerability to the sensory environment. Here, we identify a precise developmental window during which mild hearing loss affects the maturation of an auditory perceptual cue that is known to support animal communication, including human speech. Furthermore, animals reared with transient hearing loss display deficits in perceptual learning. Our results suggest that speech and language delays associated with transient or permanent childhood hearing loss may be accounted for, in part, by deficits in central auditory processing mechanisms. PMID:26224865

  9. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
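    The interaural time and level cues mentioned above can be approximated without any HRTF at all; this sketch uses the Woodworth spherical-head formula for ITD and a simple sine panning law standing in for level and spectral cues, both generic stand-ins rather than anything from the study:

    ```python
    import math

    SPEED_OF_SOUND = 343.0   # m/s
    HEAD_RADIUS = 0.0875     # m, an assumed average head radius

    def itd_seconds(azimuth_deg):
        """Woodworth spherical-head approximation of interaural time difference."""
        th = math.radians(azimuth_deg)
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (th + math.sin(th))

    def ild_gains(azimuth_deg):
        """Crude sine-law level panning standing in for spectral ILD cues.
        Returns (left, right) amplitude gains with constant total power."""
        th = math.radians(azimuth_deg)
        left = math.sqrt((1 - math.sin(th)) / 2)
        right = math.sqrt((1 + math.sin(th)) / 2)
        return left, right

    # A source at 90 degrees right yields the maximal ITD of roughly 0.66 ms
    # and places essentially all of the energy in the right ear.
    print(round(itd_seconds(90.0) * 1000, 2))  # 0.66
    ```

    Cues this coarse leave the spectral detail of real HRTFs unmodeled, which is exactly the gap that the cross-modal learning described above appears to help close for generic-HRTF listeners.
    
    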

  10. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound-posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources across two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonic synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770
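    First-order ambisonic encoding, the kind of synthesis used to build such immersive sound fields, can be sketched as follows (traditional B-format with the W channel scaled by 1/√2; a simplified illustration, not the authors' renderer):

    ```python
    import math

    def foa_encode(sample, azimuth_deg, elevation_deg=0.0):
        """First-order ambisonic (B-format) encoding of one mono sample
        arriving from the given direction. Returns (W, X, Y, Z)."""
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        w = sample / math.sqrt(2)                 # omnidirectional component
        x = sample * math.cos(az) * math.cos(el)  # front-back
        y = sample * math.sin(az) * math.cos(el)  # left-right
        z = sample * math.sin(el)                 # up-down
        return w, x, y, z

    # A source straight ahead contributes only to the W and X channels.
    w, x, y, z = foa_encode(1.0, 0.0)
    print(round(x, 3), round(y, 3))  # 1.0 0.0
    ```

    Summing the encoded channels of many sources, then decoding to a loudspeaker array, yields the enriched immersive environments used in the second experiment.
    
    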

  11. Evidence for multisensory spatial-to-motor transformations in aiming movements of children.

    PubMed

    King, Bradley R; Kagerer, Florian A; Contreras-Vidal, Jose L; Clark, Jane E

    2009-01-01

    The extant developmental literature investigating age-related differences in the execution of aiming movements has predominantly focused on visuomotor coordination, despite the fact that additional sensory modalities, such as audition and somatosensation, may contribute to motor planning, execution, and learning. The current study investigated the execution of aiming movements toward both visual and acoustic stimuli. In addition, we examined the interaction between visuomotor and auditory-motor coordination as 5- to 10-yr-old participants executed aiming movements to visual and acoustic stimuli before and after exposure to a visuomotor rotation. Children in all age groups demonstrated significant improvement in performance under the visuomotor perturbation, as indicated by decreased initial directional and root mean squared errors. Moreover, children in all age groups demonstrated significant visual aftereffects during the postexposure phase, suggesting a successful update of their spatial-to-motor transformations. Interestingly, these updated spatial-to-motor transformations also influenced auditory-motor performance, as indicated by distorted movement trajectories during the auditory postexposure phase. The distorted trajectories were present during auditory postexposure even though the auditory-motor relationship was not manipulated. Results suggest that by the age of 5 yr, children have developed a multisensory spatial-to-motor transformation for the execution of aiming movements toward both visual and acoustic targets.

  12. Application of auditory signals to the operation of an agricultural vehicle: results of pilot testing.

    PubMed

    Karimi, D; Mondor, T A; Mann, D D

    2008-01-01

    The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
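As a rough sketch of how an auditory steering display of this kind might encode driving error, the function below maps error sign to a stereo channel and magnitude to a beep repetition rate. The mapping and all parameter values are hypothetical illustrations, not taken from the study:

```python
def steering_cue(error_m, max_error_m=1.0):
    """Map lateral driving error (metres; negative = left of the line) to an
    auditory cue: direction -> stereo channel, magnitude -> beep rate.

    All parameter values here are assumed for illustration only.
    """
    channel = "left" if error_m < 0 else "right"
    severity = min(abs(error_m) / max_error_m, 1.0)  # clip to [0, 1]
    beeps_per_s = 1.0 + 7.0 * severity  # 1 Hz on the line, up to 8 Hz
    return channel, beeps_per_s
```

A mapping of this sort keeps the eyes free for other work, consistent with the subjects' reported preference for auditory monitoring combined with visual steering.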

  13. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    PubMed

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  14. The head-centered meridian effect: auditory attention orienting in conditions of impaired visuo-spatial information.

    PubMed

    Olivetti Belardinelli, Marta; Santangelo, Valerio

    2005-07-08

    This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, separated from it respectively by the vertical visual meridian (VM), the vertical head-centered meridian (HCM) or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cued, and when the target locations were on the opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of HCM were not found. Therefore it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.

  15. Effects of laterality and pitch height of an auditory accessory stimulus on horizontal response selection: the Simon effect and the SMARC effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2009-08-01

    In the present article, we investigated the effects of pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, a right (left) response with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height is strong enough to influence horizontal responses despite modality differences with the task target.

  16. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
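The geometry behind this result can be sketched numerically using the common low-frequency spherical-head approximation, ITD ≈ (3a/c)·sin θ. The head radius and the 20 µs ITD just-noticeable difference below are assumed round numbers for illustration, not values reported by the study:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at ~20 degrees C
HEAD_RADIUS = 0.0875    # m, a typical adult value (assumed)

def itd_low_freq(azimuth_deg, a=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Low-frequency spherical-head approximation: ITD = (3a/c) * sin(theta)."""
    return 3.0 * a / c * math.sin(math.radians(azimuth_deg))

def acuity_deg(azimuth_deg, itd_jnd=20e-6, a=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Just-noticeable azimuth change if the ITD JND is uniform (assumed 20 us).

    Since d(ITD)/d(theta) = (3a/c) * cos(theta), the resolvable angle is the
    JND divided by that derivative, which grows as the source moves laterally.
    """
    d_itd = 3.0 * a / c * math.cos(math.radians(azimuth_deg))
    return math.degrees(itd_jnd / d_itd)

# Acuity is best at the midline and degrades toward the side:
for az in (0, 30, 60, 80):
    print(f"{az:2d} deg: ITD = {itd_low_freq(az) * 1e6:6.1f} us, "
          f"acuity ~ {acuity_deg(az):5.2f} deg")
```

With these assumed values the predicted midline acuity is about 1.5°, the same order as the ~1° figure in the abstract, and it worsens roughly as 1/cos θ toward the side, reproducing the non-linear lateral degradation the study models.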

  17. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    USDA-ARS?s Scientific Manuscript database

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  18. Display technology - Human factors concepts

    NASA Astrophysics Data System (ADS)

    Stokes, Alan; Wickens, Christopher; Kite, Kirsten

    1990-03-01

    Recent advances in the design of aircraft cockpit displays are reviewed, with an emphasis on their applicability to automobiles. The fundamental principles of display technology are introduced, and individual chapters are devoted to selective visual attention, command and status displays, foveal and peripheral displays, navigational displays, auditory displays, color and pictorial displays, head-up displays, automated systems, and dual-task performance and pilot workload. Diagrams, drawings, and photographs of typical displays are provided.

  19. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  20. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.

  1. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  2. Use of sonification in the detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Cole, Robert J.; Kruesi, Heidi; Greene, Herbert; Monahan, Ganesh; Hall, David L.

    2012-06-01

    In this paper, we describe the construction of a soundtrack that fuses stock market data with information taken from tweets. This soundtrack, or auditory display, presents the numerical and text data in such a way that anomalous events may be readily detected, even by untrained listeners. The soundtrack generation is flexible, allowing an individual listener to create a unique audio mix from the available information sources. Properly constructed, the display exploits the auditory system's sensitivity to periodicities, dynamic changes, and patterns. This type of display could be valuable in environments that demand high levels of situational awareness based on multiple sources of incoming information.
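As a minimal illustration of the parameter-mapping idea behind such a soundtrack, the sketch below maps a data series onto an equal-tempered pitch range, so an anomalous jump becomes an audible pitch leap. The price series, note range, and mapping are invented for illustration and are not from the system described:

```python
def map_to_midi(value, lo, hi, base_midi=48, span=24):
    """Linearly map a data value onto a two-octave MIDI-note range (assumed)."""
    frac = (value - lo) / (hi - lo)
    return base_midi + frac * span

def midi_to_hz(midi_note):
    """Equal-tempered conversion: A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12)

# Hypothetical stream of closing prices; the outlier at index 4 maps to
# the top of the pitch range and stands out as a sudden leap.
prices = [101.2, 101.5, 101.1, 101.4, 108.9, 101.3]
lo, hi = min(prices), max(prices)
freqs = [midi_to_hz(map_to_midi(p, lo, hi)) for p in prices]
```

A full display of the kind described would layer several such mappings (for example, tweet volume onto loudness or timbre) and render them through an audio engine, letting each listener choose their own mix.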

  3. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  4. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Human Engineering Data Base for Design and Selection of Cathode Ray Tube and Other Display Systems

    DTIC Science & Technology

    1984-07-01

    very rapidly when used with stationary or slow-moving electron beam. Infrared stimulable phosphor, Y-phosphor, resists burning compared to PI 1...decrement traceable to intense auditory noise has been demonstrated. Noise here refers to 100 dB throughout the task, with equal energy in all...Visual display are low, a variety of auditory stimuli reduces the vigilance decrement. The audio decrement appears to be less over time than does the

  6. Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues

    ERIC Educational Resources Information Center

    Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.

    2016-01-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…

  7. The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Sodnik, Jaka; Jakus, Grega; Tomazic, Saso

    2012-01-01

    Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…

  8. Evaluating the Use of Auditory Systems to Improve Performance in Combat Search and Rescue

    DTIC Science & Technology

    2012-03-01

    take advantage of human binaural hearing to present spatial information through auditory stimuli as it would occur in the real world. This allows the...multiple operators unambiguously and in a short amount of time. Spatial audio basics Spatial audio works with human binaural hearing to generate... binaural recordings “sound better” when heard in the same location where the recordings were made. While this appears to be related to the acoustic

  9. Mismatch negativity (MMN) reveals inefficient auditory ventral stream function in chronic auditory comprehension impairments.

    PubMed

    Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen

    2014-10-01

    Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement, the better the discrimination accuracy. The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.

  10. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  11. The effects of ambient music on simulated anaesthesia monitoring.

    PubMed

    Sanderson, P M; Tosh, N; Philp, S; Rudie, J; Watson, M O; Russell, W J

    2005-11-01

    We examined the effect of no music, classical music or rock music on simulated patient monitoring. Twenty-four non-anaesthetist participants with high or low levels of musical training were trained to monitor visual and auditory displays of patients' vital signs. In nine anaesthesia test scenarios, participants were asked every 50-70 s whether one of five vital signs was abnormal and the trend of its direction. Abnormality judgements were unaffected by music or musical training. Trend judgements were more accurate when music was playing (p = 0.0004). Musical participants reported trends more accurately (p = 0.004), and non-musical participants tended to benefit more from music than did the musical participants (p = 0.063). Music may provide a pitch and rhythm standard from which participants can judge changes in vital signs from auditory displays. Nonetheless, both groups reported that it was easier to monitor the patient with no music (p = 0.0001), and easier to rely upon the auditory displays with no music (p = 0.014).

  12. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  13. Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.

    ERIC Educational Resources Information Center

    Wetherby, Amy Miller; And Others

    1981-01-01

    The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)

  14. Auditory displays as occasion setters.

    PubMed

    Mckeown, Denis; Isherwood, Sarah; Conway, Gareth

    2010-02-01

    The aim of this study was to evaluate whether representational sounds that capture the richness of experience of a collision enhance performance in braking to avoid a collision relative to other forms of warnings in a driving simulator. There is increasing interest in auditory warnings that are informative about their referents. But as well as providing information about some intended object, warnings may be designed to set the occasion for a rich body of information about the outcomes of behavior in a particular context. These richly informative warnings may offer performance advantages, as they may be rapidly processed by users. An auditory occasion setter for a collision (a recording of screeching brakes indicating imminent collision) was compared with two other auditory warnings (an abstract and an "environmental" sound), a speech message, a visual display, and no warning in a fixed-base driving simulator as interfaces to a collision avoidance system. The main measure was braking response times at each of two headways (1.5 s and 3 s) to a lead vehicle. The occasion setter demonstrated statistically significantly faster braking responses at each headway in 8 out of 10 comparisons (with braking responses equally fast to the abstract warning at 1.5 s and the environmental warning at 3 s). Auditory displays that set the occasion for an outcome in a particular setting and for particular behaviors may offer small but critical performance enhancements in time-critical applications. The occasion setter could be applied in settings where speed of response by users is of the essence.

  15. Auditory Attentional Control and Selection during Cocktail Party Listening

    PubMed Central

    Hill, Kevin T.

    2010-01-01

    In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393

  16. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  17. Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.

    PubMed

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent

    2010-07-01

Persons with autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed extending the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones, which are processed in primary auditory cortical regions, but diminished performance for complex tones, which require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and the enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not those with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  18. Research and Studies Directory for Manpower, Personnel, and Training

    DTIC Science & Technology

    1989-05-01

    LOUIS MO 314-889-6805 CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX TANGNEY J AIR FORCE OFFICE OF SCIENTIFIC RESEARCH 202-767-5021 A MODEL FOR...VISUAL ATTENTION AUDITORY PERCEPTION OF COMPLEX SOUNDS CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX EYE MOVEMENTS AND SPATIAL PATTERN VISION EYE

  19. Touch-screen technology for the dynamic display of 2D spatial information without vision: promise and progress.

    PubMed

    Klatzky, Roberta L; Giudice, Nicholas A; Bennett, Christopher R; Loomis, Jack M

    2014-01-01

    Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.

  20. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. The influence of the Japanese waving cat on the joint spatial compatibility effect: A replication and extension of Dolk, Hommel, Prinz, and Liepelt (2013).

    PubMed

    Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph

    2017-01-01

In a joint go/no-go Simon task, each of two participants responds to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally, and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task setting in which non-social objects, such as a Japanese waving cat, were present but no real co-actor was. They postulated that any sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient has so far not been well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual (Experiment 2) joint go/no-go Simon task. Results revealed that joint spatial compatibility effects occurred only in the auditory Simon task, and only when the cat provided auditory cues; no joint spatial compatibility effects were found in the visual Simon task. This demonstrates that it is not the sufficiently salient object alone that produces joint spatial compatibility effects but rather a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.

  2. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
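In one dimension, the head-centered-to-eye-centered transformation discussed here reduces to subtracting the current eye-in-head position from the sound's head-centered azimuth. A minimal illustrative sketch (the function name and values are ours, not the authors'):

```python
# Converting a head-centered auditory azimuth into eye-centered
# coordinates: subtract the eyes' horizontal position in the head.
# A deliberate 1-D simplification of the transformation reviewed above.

def head_to_eye_centered(sound_azimuth_deg, eye_in_head_deg):
    """Eye-centered azimuth of a sound, given eye-in-head position.

    With the eyes rotated 10 deg to the right and a sound 30 deg right
    of the head's midline, the sound lies 20 deg right of the line of
    gaze -- the quantity needed to program a gaze shift to the source.
    """
    return sound_azimuth_deg - eye_in_head_deg

print(head_to_eye_centered(30.0, 10.0))  # -> 20.0
```

The reverse mapping (eye-centered to head-centered) is the same operation with the sign flipped, which is why either system could in principle be brought into the other's frame of reference.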

  3. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

When auditory stimuli are used in two-dimensional spatial compatibility tasks, in which the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs: horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded by the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there is cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through the spatial-musical association of response codes (SMARC) effect, whereby pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than for visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the pitch dimension influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not a matter of response selection on a trial-to-trial basis, but of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension, and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  4. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and...Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB...fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side of

  5. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    PubMed

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective from which some view their calendar is somewhat flexible. One well-studied synaesthete, L, adopts different viewpoints for months seen versus heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (a larger response-time difference between valid and invalid targets) following the audiovisual congruent cues, since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As expected, L showed enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence-space synaesthesia (Eagleman, 2009).

  6. Linear multivariate evaluation models for spatial perception of soundscape.

    PubMed

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

Soundscape refers to a sound environment considered with an emphasis on auditory perception and social or cultural understandings, and spatial perception is significant to it. However, previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 binaurally recorded soundscape samples and a set of auditory experiments on subjective spatial perception (SSP), an analysis relating semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamics (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the audience perceived, the worse their spatial awareness, while the closer and more directional the sound-source images and the greater the dynamics and number of sound sources in the soundscape, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamics, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also seems to promote the audience's preference slightly. Finally, setting SSP as a function of the semantic parameters and of Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
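The IACC used as a predictor in these models is conventionally defined as the maximum of the normalized interaural cross-correlation of the two ear signals over a small range of lags (roughly ±1 ms). A minimal sketch assuming that convention, not the study's exact implementation:

```python
import math

def iacc(left, right, max_lag):
    """Interaural cross-correlation coefficient: the maximum of the
    normalized cross-correlation between the left- and right-ear
    signals over lags of -max_lag..+max_lag samples (about +/-1 ms
    at audio sample rates). Values near 1 indicate highly coherent
    ear signals; lower values indicate a more diffuse sound field."""
    norm = math.sqrt(sum(x * x for x in left) * sum(y * y for y in right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for n in range(len(left)):
            m = n + lag
            if 0 <= m < len(right):
                acc += left[n] * right[m]
        best = max(best, abs(acc) / norm)
    return best

# Identical ear signals are perfectly coherent, so IACC is ~1.0:
sig = [0.1, 0.5, -0.3, 0.2]
print(iacc(sig, sig, 2))
```

Lower IACC values correspond to wider, more enveloping spatial impressions, which is why the study treats IACC as one axis of its Leq-D-IACC predictor set.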

  7. Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.

    PubMed

    Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J

    2016-06-01

The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall, our results indicate that the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC-dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.

  8. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.

    2012-01-01

Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audiovisual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volume and varying reverberation time. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to match a varying set of audio and visual cues in the rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities interact in distinct ways. The greatest differences between the panned audio stimuli, given a fixed visual width, were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585

  9. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. 
A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
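Of the binaural cues examined in this work, the interaural level difference (ILD) is the simplest to state: the ratio of the signal levels at the two ears, expressed in dB. A minimal sketch with illustrative toy signals (not bat recordings):

```python
import math

def rms(samples):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ild_db(left, right):
    """Interaural level difference in dB (positive = left ear louder).
    One of the direction-dependent cues, alongside monaural spectral
    notches, that the receiver-directionality measurements address."""
    return 20.0 * math.log10(rms(left) / rms(right))

# A source off to the left reaches the left ear at a higher level;
# a 2:1 amplitude ratio corresponds to about +6 dB.
left = [0.4, -0.4, 0.4, -0.4]
right = [0.2, -0.2, 0.2, -0.2]
print(round(ild_db(left, right), 2))
```

Because both ILD and the spectral-notch cue vary with azimuth and elevation, as the thesis notes, neither alone fixes a source direction; their redundancy is what gives the bat's auditory system flexibility.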

  10. Proposals for Enhancing the Auditory Presentation of Passive Sonar Information

    DTIC Science & Technology

    1994-11-01

    Affecting Auditory Displays. NAVTRAEQUIPCEN 80-D-0011/0037-1, Eagle Technology Incorporated, Orlando FL, 1984. 8. MACMILLAN, N. A. and C. D. CREELMAN ... Psychology , Denver, CO, 1971. -7- P148988.PDF [Page: 14 of 15] SECURITY CLASSIFICATION OF FORM (Highest classification of Title, Abstract, Keywords

  11. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams, in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in whom auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that once grouped, the separate call components are weighted differently in recognizing and locating the call, the so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  12. Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus.

    PubMed

    Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J

    1998-12-01

Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus. J. Neurophysiol. 80: 2941-2953, 1998. The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalities integrated with a motor output system. It is implicated in saccade generation and in behavioral responses to novel sensory stimuli, and it receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to chronically deliver neurotransmitter receptor antagonists to the SC of the guinea pig, to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep-layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer developed a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine resulted in a reduction in the proportion of spatially tuned responses, with an increase in the proportion of broadly tuned responses and a degradation in topographic order.
Thus N-methyl-D-aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some, but not total, disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC, but not necessarily in the generation of the topographical order of these responses.

  13. Mapping feature-sensitivity and attentional modulation in human auditory cortex with functional magnetic resonance imaging

    PubMed Central

    Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A

    2011-01-01

    Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093

  14. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.

    1996-01-01

This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources, and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics: the influence of a listener's a-priori knowledge of source characteristics, and the discriminability of real and virtual sources. In the period since the last Progress Report, we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources, and we have expanded our studies of multiple sources. The results of this research are described below.
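Virtual sources in such 3-D displays are conventionally synthesized by filtering a mono signal with the left- and right-ear head-related impulse responses (the time-domain counterparts of the HRTFs) measured for the target direction. A minimal sketch of that synthesis step, with toy placeholder HRIRs rather than measured data:

```python
def convolve(signal, impulse_response):
    """Direct-form FIR convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def synthesize_virtual_source(mono, hrir_left, hrir_right):
    """Binaural synthesis: filter one mono signal with the left- and
    right-ear HRIRs for the desired direction, yielding the headphone
    signals that evoke an image at that position. The HRIRs passed in
    here are toy placeholders, not measured transfer functions."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

mono = [1.0, 0.0, 0.0]                     # a unit impulse as input
hrir_l = [0.9, 0.1]                        # placeholder left-ear HRIR
hrir_r = [0.3, 0.05]                       # placeholder right-ear HRIR
left_out, right_out = synthesize_virtual_source(mono, hrir_l, hrir_r)
print(left_out, right_out)
```

In a real display the HRIR pair would come from measurements like those the report describes, and the direction-dependent level, timing, and spectral differences between the two filters are what place the auditory image in space.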

  15. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  16. Knockout Mice for Dyslexia Susceptibility Gene Homologs KIAA0319 and KIAA0319L have Unaffected Neuronal Migration but Display Abnormal Auditory Processing

    PubMed Central

    Guidi, Luiz G; Mattley, Jane; Martinez-Garay, Isabel; Monaco, Anthony P; Linden, Jennifer F; Velayos-Baeza, Antonio

    2017-01-01

Developmental dyslexia is a neurodevelopmental disorder that affects reading ability and is caused by genetic and non-genetic factors. Amongst the susceptibility genes identified to date, KIAA0319 is a prime candidate. RNA-interference experiments in rats suggested its involvement in cortical migration, but we could not confirm these findings in Kiaa0319-mutant mice. Given that its homologous gene Kiaa0319L (AU040320) has also been proposed to play a role in neuronal migration, we interrogated whether absence of AU040320 alone or together with KIAA0319 affects migration in the developing brain. Analyses of AU040320 and double Kiaa0319;AU040320 knockouts (dKO) revealed no evidence for impaired cortical lamination, neuronal migration, neurogenesis, or other anatomical abnormalities. However, dKO mice displayed an auditory deficit in a behavioral gap-in-noise detection task. In addition, recordings of click-evoked auditory brainstem responses revealed suprathreshold deficits in wave III amplitude in AU040320-KO mice, and more general deficits in dKOs. These findings suggest that absence of AU040320 disrupts firing and/or synchrony of activity in the auditory brainstem, while loss of both proteins might affect both peripheral and central auditory function. Overall, these results stand against the proposed role of KIAA0319 and AU040320 in neuronal migration and outline their relationship with deficits in the auditory system. PMID:29045729

  17. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

In patients with visual hemifield defects, residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the direct retinotectal SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal) or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bi-modal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind, visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  18. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  20. Characterization of the Spatial Structure of Local Functional Connectivity Using Multidistance Average Correlation Measures.

    PubMed

    Macià, Dídac; Pujol, Jesus; Blanco-Hinojo, Laura; Martínez-Vilavella, Gerard; Martín-Santos, Rocío; Deus, Joan

    2018-06-01

    There is ample evidence from basic research in neuroscience of the importance of local corticocortical networks. Millimetric resolution is achievable with current functional magnetic resonance imaging (fMRI) scanners and sequences, and consequently a number of "local" activity similarity measures have been defined to describe patterns of segregation and integration at this spatial scale. We have introduced the use of IsoDistant Average Correlation (IDAC), easily defined as the average fMRI temporal correlation of a given voxel with other voxels placed at increasingly separated isodistant intervals, to characterize the curve of local fMRI signal similarities. IDAC curves can be statistically compared using parametric multivariate statistics. Furthermore, by using red-green-blue color coding to display jointly IDAC values belonging to three different distance lags, IDAC curves can also be displayed as multidistance IDAC maps. We applied IDAC analysis to a sample of 41 subjects scanned under two different conditions, a resting state and an auditory-visual continuous stimulation. Multidistance IDAC mapping was able to discriminate between gross anatomofunctional cortical areas and, moreover, was sensitive to modulation between the two brain conditions in areas known to activate and deactivate during audiovisual tasks. Unlike previous fMRI local similarity measures already in use, our approach draws special attention to the continuous smooth pattern of local functional connectivity.
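    The IDAC definition above lends itself to a short sketch: correlate a seed voxel's time series with every other voxel, then average the correlations within thin isodistant shells. The following is a minimal illustration under assumed array shapes; the function name, shell tolerance, and lag choices are ours, not the authors'.

    ```python
    import numpy as np

    def idac_curve(ts, coords, seed_idx, lags, tol=1.0):
        """IsoDistant Average Correlation (IDAC) curve for one seed voxel.

        ts       : (n_voxels, n_timepoints) array of fMRI time series
        coords   : (n_voxels, 3) voxel coordinates in mm
        seed_idx : index of the seed voxel
        lags     : distances (mm) at which correlations are averaged
        tol      : half-width of the isodistant shell around each lag
        """
        # Pearson correlation of the seed with every voxel (z-score, then dot)
        z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
        r = z @ z[seed_idx] / ts.shape[1]
        # Distance of every voxel from the seed
        dist = np.linalg.norm(coords - coords[seed_idx], axis=1)
        curve = []
        for lag in lags:
            shell = (np.abs(dist - lag) <= tol) & (dist > 0)  # exclude the seed itself
            curve.append(r[shell].mean() if shell.any() else np.nan)
        return np.array(curve)
    ```

    A multidistance IDAC map as described in the abstract would then evaluate three such lags per voxel and assign the three values to the red, green, and blue channels of a color image.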

  1. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 10^-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex.
Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
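    The directional claim above (high gamma predicts alpha, but not the reverse) rests on Granger causality, which can be illustrated with a textbook bivariate test: compare an autoregressive model of the alpha power series with and without lagged high-gamma terms. A minimal sketch over plain arrays of band-power samples; the function name and fixed model order are illustrative assumptions, and the published analysis will have differed in detail.

    ```python
    import numpy as np

    def granger_f(x, y, order=5):
        """F-statistic for the hypothesis that x Granger-causes y.

        Restricted model:  y[t] ~ const + past values of y
        Full model:        y[t] ~ const + past values of y + past values of x
        A large F means adding x's history improves prediction of y.
        """
        T = len(y)
        Y = y[order:]
        Ylags = np.column_stack([y[order - k:T - k] for k in range(1, order + 1)])
        Xlags = np.column_stack([x[order - k:T - k] for k in range(1, order + 1)])
        ones = np.ones((T - order, 1))

        def rss(design):
            beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
            resid = Y - design @ beta
            return resid @ resid

        rss_r = rss(np.hstack([ones, Ylags]))          # y's own past only
        rss_f = rss(np.hstack([ones, Ylags, Xlags]))   # plus x's past
        df1 = order
        df2 = (T - order) - (2 * order + 1)
        return ((rss_r - rss_f) / df1) / (rss_f / df2)
    ```

    Running the test in both directions and comparing the two F-statistics gives exactly the kind of asymmetry the abstract reports.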

  2. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete

    PubMed Central

    Jarick, Michelle; Stewart, Mark T.; Smilek, Daniel; Dixon, Michael J.

    2013-01-01

    Time-space synaesthetes “see” time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred “auditory” viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the “preferred” auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009). PMID:24137140

  3. Helmet Electronics & Display System-Upgradeable Protection (HEaDS-UP) Phase III Assessment: Headgear Effects on Auditory Perception

    DTIC Science & Technology

    2013-11-01

    difference between front and rear was less pronounced. Localization errors near 0° and 180° are dominated by front-back confusions because binaural ...used to disambiguate binaural information; therefore, it can be argued that most differences in auditory localization ability resulting from

  4. The Taxiway Navigation and Situation Awareness (T-NASA) System

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Sridhar, Banavar (Technical Monitor)

    1997-01-01

    The goal of NASA's Terminal Area Productivity (TAP) Low-Visibility Landing and Surface Operations (LVLASO) subelement is to improve the efficiency of airport surface operations for commercial aircraft operating in weather conditions to Category IIIB while maintaining a high degree of safety. Currently, surface operations are one of the least technologically sophisticated components of the air transport system, being conducted in the 1990's with the same basic technology as in the 1930's. Pilots are given little or no explicit information about their current position, and routing information is limited to ATC communications and airport charts. In TAP/LVLASO, advanced technologies such as satellite navigation systems, digital data communications, advanced information presentation technology, and ground surveillance systems will be integrated into flight deck displays to enable expeditious and safe traffic movement on the airport surface. The cockpit display suite is called the T-NASA (Taxiway Navigation and Situation Awareness) System. This system has three integrated components: 1) Moving Map - track-up airport surface display with own-ship, traffic and graphical route guidance; 2) Scene-Linked Symbology - route/taxi information virtually projected via a Head-up Display (HUD) onto the forward scene; and 3) 3-D Audio Ground Collision Avoidance and Navigation system - spatially-localized auditory traffic and navigation alerts. In the current paper, the design philosophy of the T-NASA system will be presented, and the T-NASA system display components described.

  5. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  6. Good Holders, Bad Shufflers: An Examination of Working Memory Processes and Modalities in Children with and without Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Simone, Ashley N; Bédard, Anne-Claude V; Marks, David J; Halperin, Jeffrey M

    2016-01-01

    The aim of this study was to examine working memory (WM) modalities (visual-spatial and auditory-verbal) and processes (maintenance and manipulation) in children with and without attention-deficit/hyperactivity disorder (ADHD). The sample consisted of 63 8-year-old children with ADHD and an age- and sex-matched non-ADHD comparison group (N=51). Auditory-verbal and visual-spatial WM were assessed using the Digit Span and Spatial Span subtests from the Wechsler Intelligence Scale for Children Integrated - Fourth Edition. WM maintenance and manipulation were assessed via forward and backward span indices, respectively. Data were analyzed using a 3-way Group (ADHD vs. non-ADHD)×Modality (Auditory-Verbal vs. Visual-Spatial)×Condition (Forward vs. Backward) Analysis of Variance (ANOVA). Secondary analyses examined differences between Combined and Predominantly Inattentive ADHD presentations. Significant Group×Condition (p=.02) and Group×Modality (p=.03) interactions indicated differentially poorer performance by those with ADHD on backward relative to forward and visual-spatial relative to auditory-verbal tasks, respectively. The 3-way interaction was not significant. Analyses targeting ADHD presentations yielded a significant Group×Condition interaction (p=.009) such that children with ADHD-Predominantly Inattentive Presentation performed differentially poorer on backward relative to forward tasks compared to the children with ADHD-Combined Presentation. Findings indicate a specific pattern of WM weaknesses (i.e., WM manipulation and visual-spatial tasks) for children with ADHD. Furthermore, differential patterns of WM performance were found for children with ADHD-Predominantly Inattentive versus Combined Presentations. (JINS, 2016, 22, 1-11).

  7. Orienting Auditory Spatial Attention Engages Frontal Eye Fields and Medial Occipital Cortex in Congenitally Blind Humans

    PubMed Central

    Garg, Arun; Schwartz, Daniel; Stevens, Alexander A.

    2007-01-01

    What happens in vision related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas including frontal eye-fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB, their FEF neurons should be recruited for alternative functions if their attentional role in sighted individuals is only due to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task, using endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation of all CB subjects, similar but stronger than in SC, suggesting that FEF plays a role in endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for processing of spatialized auditory targets. PMID:17397882

  8. Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis.

    PubMed

    Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A

    2012-08-01

    Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

  9. Neuropsychological impairments on the NEPSY-II among children with FASD.

    PubMed

    Rasmussen, Carmen; Tamana, Sukhpreet; Baugh, Lauren; Andrew, Gail; Tough, Suzanne; Zwaigenbaum, Lonnie

    2013-01-01

    We examined the pattern of neuropsychological impairments of children with FASD (compared to controls) on NEPSY-II measures of attention and executive functioning, language, memory, visuospatial processing, and social perception. Participants included 32 children with FASD and 30 typically developing control children, ranging in age from 6 to 16 years. Children were tested on the following subtests of the NEPSY-II: Attention and Executive Functioning (animal sorting, auditory attention/response set, and inhibition), Language (comprehension of instructions and speeded naming), Memory (memory for names/delayed memory for names), Visual-Spatial Processing (arrows), and Social Perception (theory of mind). Groups were compared using MANOVA. Children with FASD were impaired relative to controls on the following subtests: animal sorting, response set, inhibition (naming and switching conditions), comprehension of instructions, speeded naming, and memory for names total and delayed, but group differences were not significant on auditory attention, inhibition (inhibition condition), arrows, and theory of mind. Among the FASD group, IQ scores were not correlated with performance on the NEPSY-II subtests, and there were no significant differences between those with and without comorbid ADHD. The NEPSY-II is an effective and useful tool for measuring a variety of neuropsychological impairments among children with FASD. Children with FASD displayed a pattern of results with impairments (relative to controls) on measures of executive functioning (set shifting, concept formation, and inhibition), language, and memory, and relative strengths on measures of basic attention, visual spatial processing, and social perception.

  10. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. 
Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.

  12. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Stone, Scott A.; Tata, Matthew S.

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518
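    As an illustration of how such a device might render events audibly, the toy sketch below maps each camera event to a brief tone whose stereo pan follows the event's horizontal pixel position and whose pitch follows its row. The abstract does not specify the actual mapping, so everything here (function name, frequency range, panning law) is an assumption for illustration only.

    ```python
    import numpy as np

    def events_to_stereo(events, width=240, sr=44100, dur=0.05):
        """Render camera events as short spatialized tones (a toy sketch).

        events : list of (x, y, polarity) tuples from the event camera
                 (the DAVIS 240B pixel array is 240x180; x indexes the column)
        Each event becomes a brief windowed tone: horizontal position sets
        the stereo pan, vertical position sets the pitch.
        """
        n = int(sr * dur)
        t = np.arange(n) / sr
        left = np.zeros(n)
        right = np.zeros(n)
        for x, y, pol in events:
            az = x / (width - 1)                 # 0 = far left, 1 = far right
            freq = 300 + 20 * y                  # higher rows -> higher pitch
            tone = np.sin(2 * np.pi * freq * t) * np.hanning(n)
            left += (1 - az) * tone              # simple linear panning law
            right += az * tone
        peak = max(np.abs(left).max(), np.abs(right).max(), 1e-9)
        return np.stack([left, right]) / peak    # normalized stereo buffer
    ```

    A real implementation would stream events continuously and use binaural (HRTF-based) rather than stereo panning cues, but the structure of the mapping is the same.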

  13. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions.

    PubMed

    Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie

    2018-01-01

    Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli benefited most patients with severe executive dysfunction or with severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.

  14. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  15. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    PubMed

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  16. Adaptation to stimulus statistics in the perception and neural representation of auditory space.

    PubMed

    Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J

    2010-06-24

    Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
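The stimulus construction described above can be sketched in a few lines: a noise sequence is cut into segments, and each segment receives an interaural level difference (ILD) drawn from a Gaussian distribution whose mean and variance are the quantities the listeners adapt to. This is a minimal illustration, not the authors' code; the symmetric ±ILD/40 gain split and all parameter values are assumptions.

```python
import numpy as np

def ild_sequence(n_segments=100, seg_len=480, mean_db=4.0, sd_db=4.0, seed=0):
    """Binaural noise whose ILD fluctuates from segment to segment
    according to a Gaussian distribution (positive ILD = left louder,
    an arbitrary convention here)."""
    rng = np.random.default_rng(seed)
    ilds = rng.normal(mean_db, sd_db, n_segments)   # one ILD draw per segment
    noise = rng.normal(0.0, 1.0, n_segments * seg_len)
    left = np.empty_like(noise)
    right = np.empty_like(noise)
    for i, ild in enumerate(ilds):
        seg = slice(i * seg_len, (i + 1) * seg_len)
        left[seg] = noise[seg] * 10 ** (+ild / 40)  # split the level
        right[seg] = noise[seg] * 10 ** (-ild / 40) # difference symmetrically
    return left, right, ilds

left, right, ilds = ild_sequence()
# The empirical ILD of a segment recovers the drawn value:
seg0_db = 20 * np.log10(np.abs(left[:480]).mean() / np.abs(right[:480]).mean())
print(round(seg0_db, 2), round(ilds[0], 2))
```

Because both ears share the same noise token, the 20·log10 level ratio of each segment equals the drawn ILD exactly; shifting `mean_db` or `sd_db` reproduces the mean- and variance-manipulations of the study in miniature.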

  17. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    PubMed Central

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role. PMID:26042055

  18. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory.

    PubMed

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: 'element motion' (EM) or 'group motion' (GM). In "EM," the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in "GM," both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short glide and the 'gap-transfer' auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role.

  19. Action video games improve reading abilities and visual-to-auditory attentional shifting in English-speaking children with dyslexia.

    PubMed

    Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone

    2017-07-19

    Dyslexia is characterized by difficulties in learning to read and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to deep English orthography, remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from visual to auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of words recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting can directly translate into better reading in English-speaking children with dyslexia.

  20. Towards an Auditory Account of Speech Rhythm: Application of a Model of the Auditory "Primal Sketch" to Two Multi-Language Corpora

    ERIC Educational Resources Information Center

    Lee, Christopher S.; Todd, Neil P. McAngus

    2004-01-01

    The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language…

  1. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    PubMed

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.

  2. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  3. Spatial selective attention in a complex auditory environment such as polyphonic music.

    PubMed

    Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf

    2010-01-01

    To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.

  4. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children

    PubMed Central

    Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica

    2017-01-01

    The specific role of early visual deprivation on spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and to the difficulty in recruiting young blind children from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization have been assessed in a group of seven 3–5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim to understand if visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed more poorly overall compared with sighted and low vision children in all the spatial tasks performed; secondly, low vision children performed as well as or better than sighted children in the same auditory spatial tasks; thirdly, higher residual levels of visual acuity were positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that the more residual vision, the better the spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and early intervention to prevent the risk of isolation and social exclusion. PMID:28443040

  5. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
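The "weighted combination" of cues "according to their respective spatial reliability" invoked above is the standard maximum-likelihood cue-combination rule, in which each estimate is weighted by its inverse variance. The sketch below writes it out; the numbers are illustrative, not the study's data.

```python
def mle_fusion(loc_v, var_v, loc_a, var_a):
    """Reliability-weighted (maximum-likelihood) combination of a visual
    and an auditory location estimate; lower variance = higher weight."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    fused = w_v * loc_v + (1.0 - w_v) * loc_a
    fused_var = 1.0 / (1.0 / var_v + 1.0 / var_a)
    return fused, fused_var

# Central vision: visual variance is low, so vision "captures" the sound
# (a strong ventriloquist effect).
print(mle_fusion(0.0, 1.0, 10.0, 16.0))
# Periphery: visual reliability has dropped to the auditory level, so the
# fused estimate sits halfway between the cues and the visual bias shrinks.
print(mle_fusion(0.0, 16.0, 10.0, 16.0))
```

The eccentricity-dependent decline of the ventriloquist effect reported in the abstract corresponds to `var_v` growing faster than `var_a` toward the periphery, which shifts weight from the visual to the auditory estimate.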

  6. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Single-trial classification of auditory event-related potentials elicited by stimuli from different spatial directions.

    PubMed

    Cabrera, Alvaro Fuentes; Hoffmann, Pablo Faundez

    2010-01-01

    This study is focused on the single-trial classification of auditory event-related potentials elicited by sound stimuli from different spatial directions. Five naïve subjects were asked to localize a sound stimulus reproduced over one of 8 loudspeakers placed in a circular array, equally spaced by 45°. The subject was seated in the center of the circular array. Due to the complexity of an eight-class classification, our approach consisted of feeding our classifier with two classes, or spatial directions, at a time. The seven chosen pairs were 0°, which was the loudspeaker directly in front of the subject, with all the other seven directions. The discrete wavelet transform was used to extract features in the time-frequency domain and a support vector machine performed the classification procedure. The average accuracy over all subjects and all pairs of spatial directions was 76.5%, σ = 3.6. The results of this study provide evidence that the direction of a sound is encoded in single-trial auditory event-related potentials.
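The pipeline described above (time-frequency features feeding a pairwise classifier on single trials) can be sketched as follows. This is a rough stand-in, not the authors' method: a hand-rolled Haar decomposition substitutes for their discrete wavelet transform, a nearest-centroid rule substitutes for their support vector machine, and the two-class "ERP" data are entirely synthetic.

```python
import numpy as np

def haar_features(trial, levels=3):
    """Crude DWT stand-in: keep the coarsest Haar approximation
    coefficients plus one detail-band energy per decomposition level."""
    a = np.asarray(trial, dtype=float)
    energies = []
    for _ in range(levels):
        if a.size % 2:                          # pad to even length
            a = np.append(a, a[-1])
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)    # next approximation level
        energies.append(np.sum(detail ** 2))
    return np.concatenate([a, energies])

# Synthetic "single-trial ERPs": one class carries a small deflection
# around sample 40, a stand-in for a direction-specific response.
rng = np.random.default_rng(0)
n, length = 60, 64
X = rng.normal(0.0, 1.0, (2 * n, length))
X[n:, 35:45] += 2.0
y = np.repeat([0, 1], n)

F = np.array([haar_features(t) for t in X])
train, test = slice(0, None, 2), slice(1, None, 2)   # even/odd trial split
mu, sd = F[train].mean(axis=0), F[train].std(axis=0)
Z = (F - mu) / sd                                    # z-score the features

# Nearest-centroid decision rule (simple stand-in for the paper's SVM).
centroids = np.array([Z[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[test][:, None] - centroids, axis=2), axis=1)
acc = (pred == y[test]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

As in the study, an eight-direction problem would be handled as a set of such binary problems, each pairing the frontal loudspeaker with one other direction.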

  8. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  9. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  10. Reproduction of auditory and visual standards in monochannel cochlear implant users.

    PubMed

    Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna

    2004-01-01

    The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modality. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users, who displayed relatively good auditory comprehension, did not differ from that of normally hearing subjects for both modalities. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension on the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.

  11. Engineering Data Compendium. Human Perception and Performance. Volume 2

    DTIC Science & Technology

    1988-01-01

    Excerpted index and cross-reference entries: 5.1004 Auditory Detection in the Presence of Visual Stimulation; 5.1005 Tactual Detection and Discrimination in the Presence of Accessory Stimulation; 5.1006 Tactile Versus Auditory Localization of Sound; 5.1007 Spatial Localization in the Presence of Inter- … New York: Wiley.

  12. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
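Recreating the source's sound pressure at the listener's eardrums, as described above, amounts to convolving a dry source signal with binaural (room) impulse responses measured at the listener's ear canals; a room mismatch corresponds to playing such a rendering back in an acoustically different environment. The sketch below shows the convolution step only; the toy impulse responses are invented for illustration.

```python
import numpy as np

def binaural_render(mono, brir_left, brir_right):
    """Headphone playback signals: convolve a dry mono source with the
    binaural room impulse responses (BRIRs) captured at the two ears."""
    return np.convolve(mono, brir_left), np.convolve(mono, brir_right)

# Hypothetical BRIRs: a direct path plus one reflection per ear, with the
# right ear attenuated and delayed, as for a source on the listener's left.
brir_l = np.zeros(64); brir_l[0] = 1.0; brir_l[40] = 0.3
brir_r = np.zeros(64); brir_r[8] = 0.6; brir_r[48] = 0.2
mono = np.random.default_rng(1).normal(0.0, 1.0, 1000)
out_l, out_r = binaural_render(mono, brir_l, brir_r)
print(out_l.shape, out_r.shape)  # each len(mono) + len(brir) - 1 samples
```

In the study's terms, distance perception degraded when the BRIRs embedded one room's reflection pattern but playback occurred in a more reverberant room, while interaural cues (and hence azimuth) survived the mismatch.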

  13. Perception-Production Link in L2 Japanese Vowel Duration: Training with Technology

    ERIC Educational Resources Information Center

    Okuno, Tomoko; Hardison, Debra M.

    2016-01-01

    This study examined factors affecting perception training of vowel duration in L2 Japanese with transfer to production. In a pre-test, training, post-test design, 48 L1 English speakers were assigned to one of three groups: auditory-visual (AV) training using waveform displays, auditory-only (A-only), or no training. Within-group variables were…

  14. The Effects of Various Fidelity Factors on Simulated Helicopter Hover

    DTIC Science & Technology

    1981-01-01

    Excerpted contents entries: Visual Display; Auditory Cues; Ship Motion Model. Abstract fragments: … and DiCarlo, 1974), the evaluation of visual, auditory, and motion cues for helicopter simulation (Parrish, Houck, and Martin, 1977), and the … supply the cue. As the tilt should be supplied subliminally, a forward/aft translation must be used to cue the acceleration's onset. If only rotation …

  15. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. 
The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation of localization performance. Following chronic exposure (10–60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teacher signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  17. Human factors evaluation of the effectiveness of multi-modality displays in advanced traveler information systems

    DOT National Transportation Integrated Search

    1999-12-01

    To achieve the goals for Advanced Traveler Information Systems (ATIS), significant information will necessarily be provided to the driver. A primary ATIS design issue is the display modality (i.e., visual, auditory, or the combination) selected for p...

  18. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    PubMed

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention, but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefited equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  19. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  20. Noise-induced tinnitus using individualized gap detection analysis and its relationship with hyperacusis, anxiety, and spatial cognition.

    PubMed

    Pace, Edward; Zhang, Jinsheng

    2013-01-01

    Tinnitus has a complex etiology that involves auditory and non-auditory factors and may be accompanied by hyperacusis, anxiety and cognitive changes. Thus far, investigations of the interrelationship between tinnitus and auditory and non-auditory impairment have yielded conflicting results. To further address this issue, we noise-exposed rats and assessed them for tinnitus using a gap detection behavioral paradigm combined with statistically-driven analysis to diagnose tinnitus in individual rats. We also tested rats for hearing detection, responsivity, and loss using prepulse inhibition and auditory brainstem response, and for spatial cognition and anxiety using Morris water maze and elevated plus maze. We found that our tinnitus diagnosis method reliably separated noise-exposed rats into tinnitus(+) and tinnitus(-) groups and detected no evidence of tinnitus in tinnitus(-) and control rats. In addition, the tinnitus(+) group demonstrated enhanced startle amplitude, indicating hyperacusis-like behavior. Despite these results, neither tinnitus, hyperacusis, nor hearing loss had any significant effect on spatial learning and memory or anxiety, though a majority of rats with the highest anxiety levels had tinnitus. These findings showed that we were able to develop a clinically relevant tinnitus(+) group and that our diagnosis method is sound. At the same time, like clinical studies, we found that tinnitus does not always result in cognitive-emotional dysfunction, although tinnitus may predispose subjects to certain impairments, such as anxiety. Other behavioral assessments may be needed to further define the relationship between tinnitus and anxiety, cognitive deficits, and other impairments.
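    Gap-detection screens of this kind typically compare startle amplitudes on gap versus no-gap trials: a silent gap normally suppresses the startle reflex, so a gap/no-gap ratio near 1 at a given background frequency suggests tinnitus is "filling in" the gap. The cutoff below is an illustrative fixed criterion, not the statistically-driven per-animal analysis the study used:

    ```python
    import statistics

    def gap_ratio(gap_trials, nogap_trials):
        """Mean startle amplitude on gap trials divided by the mean on
        no-gap trials. With normal hearing the gap suppresses the
        reflex, so the ratio falls well below 1."""
        return statistics.mean(gap_trials) / statistics.mean(nogap_trials)

    def classify(gap_trials, nogap_trials, criterion=0.8):
        """Illustrative fixed cutoff (an assumption for this sketch);
        the study instead derived a criterion per individual rat."""
        if gap_ratio(gap_trials, nogap_trials) >= criterion:
            return "tinnitus(+)"   # gap failed to suppress startle
        return "tinnitus(-)"
    ```

    With hypothetical startle amplitudes, `classify([9, 10, 11], [10, 10, 10])` yields "tinnitus(+)" (ratio 1.0), while `classify([4, 5, 6], [10, 10, 10])` yields "tinnitus(-)" (ratio 0.5).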

  1. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of the sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D as for the ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated in both the active and passive conditions. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
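    An ITD-only stimulus like the one described can be approximated digitally by delaying one channel by a sub-millisecond interval. A minimal numpy sketch, with an assumed 44.1 kHz sample rate and a hypothetical 300 µs ITD (values chosen for illustration, not taken from the study):

    ```python
    import numpy as np

    def itd_stimulus(mono, fs, itd_s):
        """Build a (2, N) binaural signal carrying only an interaural
        time difference: the right channel is a delayed copy of the
        left, so level and spectral cues stay identical."""
        delay = int(round(itd_s * fs))                      # delay in samples
        left = mono
        right = np.concatenate([np.zeros(delay), mono])[: len(mono)]
        return np.stack([left, right])

    fs = 44100
    t = np.arange(fs) / fs                                  # 1 s of signal
    tone = np.sin(2 * np.pi * 500 * t)                      # 500 Hz carrier
    stim = itd_stimulus(tone, fs, 300e-6)                   # ~300 us ITD
    ```

    At 44.1 kHz a 300 µs ITD rounds to a 13-sample interaural lag, which lateralizes the tone toward the leading (left) ear.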

  2. Effects of methylphenidate on working memory components: influence of measurement.

    PubMed

    Bedard, Anne-Claude; Jain, Umesh; Johnson, Sheilah Hogg; Tannock, Rosemary

    2007-09-01

    To investigate the effects of methylphenidate (MPH) on components of working memory (WM) in attention-deficit hyperactivity disorder (ADHD) and determine the responsiveness of WM measures to MPH. Participants were a clinical sample of 50 children and adolescents with ADHD, aged 6 to 16 years old, who participated in an acute randomized, double-blind, placebo-controlled, crossover trial with single challenges of three MPH doses. Four components of WM were investigated, which varied in processing demands (storage versus manipulation of information) and modality (auditory-verbal; visual-spatial), each of which was indexed by a minimum of two separate measures. MPH improved the ability to store visual-spatial information irrespective of instrument used, but had no effects on the storage of auditory-verbal information. By contrast, MPH enhanced the ability to manipulate both auditory-verbal and visual-spatial information, although effects were instrument specific in both cases. MPH effects on WM are selective: they vary as a function of WM component and measurement.

  3. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
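    The MLE model tested above predicts that the bimodal percept weights each cue by its inverse variance, yielding a combined estimate more precise than either cue alone. A minimal sketch with hypothetical localization noise values (degrees), not the study's data:

    ```python
    import numpy as np

    def mle_integration(mu_v, sigma_v, mu_a, sigma_a):
        """Maximum-likelihood (inverse-variance-weighted) cue combination.
        Returns the predicted bimodal location estimate and its SD."""
        w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)        # visual weight
        w_a = 1.0 - w_v                                      # auditory weight
        mu_av = w_v * mu_v + w_a * mu_a
        sigma_av = np.sqrt(sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2))
        return mu_av, sigma_av

    # Hypothetical values: vision more precise than audition
    mu_av, sigma_av = mle_integration(mu_v=0.0, sigma_v=1.0,
                                      mu_a=4.0, sigma_a=3.0)
    # The bimodal estimate is pulled toward the visual location
    # (the ventriloquism effect) and has lower SD than either cue.
    ```

    With these numbers the predicted bimodal location is 0.4° (strongly visually captured) and the predicted SD is √0.9 ≈ 0.95°, below both unimodal SDs; observed parameters are compared against such predictions.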

  4. Local and Global Spatial Organization of Interaural Level Difference and Frequency Preferences in Auditory Cortex

    PubMed Central

    Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M

    2018-01-01

    Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
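    The opponent-channel readout mentioned above can be illustrated with two simulated populations whose sigmoid ILD tuning and noise parameters are invented for this sketch (they are not the recorded data): the decoder reports the sign of the difference between contra-preferring and ipsi-preferring population activity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pop_response(ild, sign, n=50, noise_sd=0.1):
        """Mean activity of a population sharing sigmoid ILD tuning
        (sign=+1 prefers contralateral-ear-greater ILDs, sign=-1
        ipsilateral), plus averaged per-neuron trial noise."""
        rate = 1.0 / (1.0 + np.exp(-sign * ild / 5.0))      # sigmoid tuning
        return rate + rng.normal(0.0, noise_sd, n).mean()

    def opponent_decode(ild):
        """Opponent-channel readout: classify the ILD from the sign of
        the contra-minus-ipsi population difference."""
        contra = pop_response(ild, +1)
        ipsi = pop_response(ild, -1)
        return "contralateral" if contra > ipsi else "ipsilateral"
    ```

    For ILDs well away from the midline the population difference dwarfs the averaged noise, so the classification is reliable; near 0 dB ILD the two channels converge and performance falls toward chance, mirroring why the decoder works from a population difference rather than single-neuron rates.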

  5. Compatibility of motion facilitates visuomotor synchronization.

    PubMed

    Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L

    2010-12-01

    Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.

  6. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated, in healthy subjects, the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  8. Listening to Sentences in Noise: Revealing Binaural Hearing Challenges in Patients with Schizophrenia.

    PubMed

    Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily

    2017-11-01

    The present case-control study investigated binaural hearing performance in schizophrenia patients using sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory functions. Binaural hearing was examined in four listening conditions by using the Malay version of the hearing in noise test. The syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicate a marked difference between the groups. However, the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking for the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
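    The two derived measures used above are simple arithmetic on the per-condition speech reception thresholds. The sketch below assumes the common HINT conventions (spatial release as the front-minus-side threshold difference; a composite that weights the noise-front condition double); the dB values are hypothetical:

    ```python
    def spatial_release(rts_noise_front, rts_noise_side):
        """Spatial release from masking (dB): threshold improvement when
        the noise moves from the front (colocated with the speech) to
        the side. Positive values indicate a spatial benefit."""
        return rts_noise_front - rts_noise_side

    def rts_composite(front, right, left):
        """A common HINT composite (assumed here, not stated in the
        abstract): weighted mean giving noise-front double weight."""
        return (2.0 * front + right + left) / 4.0

    # Hypothetical thresholds in dB SNR (lower = better)
    srm_right = spatial_release(rts_noise_front=-2.0, rts_noise_side=-8.0)
    composite = rts_composite(front=-2.0, right=-8.0, left=-7.0)
    ```

    With these illustrative numbers the right-ear spatial release is 6 dB and the composite threshold is -4.75 dB SNR; a patient group with reduced binaural processing would show a smaller release and a higher composite.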

  9. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
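    The linear ICA step at the core of this approach can be sketched on a toy problem: two synthetic non-Gaussian sources (standing in for spectrogram features, not the paper's binaural pipeline or its hierarchical extension) are linearly mixed and then unmixed with a minimal symmetric FastICA, recovering the sources in a purely unsupervised way.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    t = np.arange(n)

    # Two independent, non-Gaussian toy sources
    s1 = np.sign(np.sin(2 * np.pi * t / 80))        # square wave
    s2 = ((t % 100) / 50.0) - 1.0                   # sawtooth
    S = np.vstack([s1, s2])

    A = np.array([[0.8, 0.6], [0.4, 0.9]])          # unknown mixing matrix
    X = A @ S                                       # observed mixtures

    # --- minimal symmetric FastICA with tanh nonlinearity ---
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))                # whitening transform
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X

    W = rng.standard_normal((2, 2))
    for _ in range(200):
        Y = W @ Xw
        G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        # fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, per row
        W = (G @ Xw.T) / n - np.diag(Gp.mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)                 # symmetric decorrelation:
        W = u @ vt                                  # W <- (W W')^(-1/2) W

    recovered = W @ Xw                              # estimated sources
    ```

    Up to sign and permutation (the usual ICA ambiguities), each row of `recovered` correlates almost perfectly with one of the original sources; the paper applies the same principle to binaural spectrogram patches, where the learned components turn out to carry spatial information.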

  10. Does a Flatter General Gradient of Visual Attention Explain Peripheral Advantages and Central Deficits in Deaf Adults?

    PubMed Central

    Samar, Vincent J.; Berger, Lauren

    2017-01-01

    Individuals deaf from early age often outperform hearing individuals in the visual periphery on attention-dependent dorsal stream tasks (e.g., spatial localization or movement detection), but sometimes show central visual attention deficits, usually on ventral stream object identification tasks. It has been proposed that early deafness adaptively redirects attentional resources from central to peripheral vision to monitor extrapersonal space in the absence of auditory cues, producing a more evenly distributed attention gradient across visual space. However, little direct evidence exists that peripheral advantages are functionally tied to central deficits, rather than determined by independent mechanisms, and previous studies using several attention tasks typically report peripheral advantages or central deficits, not both. To test the general altered attentional gradient proposal, we employed a novel divided attention paradigm that measured target localization performance along a gradient from parafoveal to peripheral locations, independent of concurrent central object identification performance in prelingually deaf and hearing groups who differed in access to auditory input. Deaf participants without cochlear implants (No-CI), with cochlear implants (CI), and hearing participants identified vehicles presented centrally, and concurrently reported the location of parafoveal (1.4°) and peripheral (13.3°) targets among distractors. No-CI participants but not CI participants showed a central identification accuracy deficit. However, all groups displayed equivalent target localization accuracy at peripheral and parafoveal locations and nearly parallel parafoveal-peripheral gradients. Furthermore, the No-CI group’s central identification deficit remained after statistically controlling peripheral performance; conversely, the parafoveal and peripheral group performance equivalencies remained after controlling central identification accuracy. 
These results suggest that, in the absence of auditory input, reduced central attentional capacity is not necessarily associated with enhanced peripheral attentional capacity or with flattening of a general attention gradient. Our findings converge with earlier studies suggesting that a general graded trade-off of attentional resources across the visual field does not adequately explain the complex task-dependent spatial distribution of deaf-hearing performance differences reported in the literature. Rather, growing evidence suggests that the spatial distribution of attention-mediated performance in deaf people is determined by sophisticated cross-modal plasticity mechanisms that recruit specific sensory and polymodal cortex to achieve specific compensatory processing goals. PMID:28559861

  11. Defense applications of the CAVE (CAVE automatic virtual environment)

    NASA Astrophysics Data System (ADS)

    Isabelle, Scott K.; Gilkey, Robert H.; Kenyon, Robert V.; Valentino, George; Flach, John M.; Spenny, Curtis H.; Anderson, Timothy R.

    1997-07-01

    The CAVE is a multi-person, room-sized, high-resolution, 3D video and auditory environment, which can be used to present very immersive virtual environment experiences. This paper describes the CAVE technology and the capability of the CAVE system as originally developed at the Electronic Visualization Laboratory of the University of Illinois at Chicago and as more recently implemented by Wright State University (WSU) in the Armstrong Laboratory at Wright-Patterson Air Force Base (WPAFB). One planned use of the WSU/WPAFB CAVE is research addressing the appropriate design of display and control interfaces for controlling uninhabited aerial vehicles. The WSU/WPAFB CAVE has a number of features that make it well-suited to this work: (1) 360 degrees surround, plus floor, high resolution visual displays, (2) virtual spatialized audio, (3) the ability to integrate real and virtual objects, and (4) rapid and flexible reconfiguration. However, even though the CAVE is likely to have broad utility for military applications, it does have certain limitations that may make it less well-suited to applications that require 'natural' haptic feedback, vestibular stimulation, or an ability to interact with close detailed objects.

  12. Effectiveness of enhanced pulse oximetry sonifications for conveying oxygen saturation ranges: a laboratory comparison of five auditory displays.

    PubMed

    Paterson, E; Sanderson, P M; Paterson, N A B; Loeb, R G

    2017-12-01

    Anaesthetists monitor auditory information about a patient's vital signs in an environment that can be noisy and while performing other cognitively demanding tasks. It can be difficult to identify oxygen saturation (SpO2) values using existing pulse oximeter auditory displays (sonifications). In a laboratory setting, we compared the ability of non-clinician participants to detect transitions into and out of an SpO2 target range using five different sonifications while they performed a secondary distractor arithmetic task in the presence of background noise. The control sonification was based on the auditory display of current pulse oximeters and comprised a variable pitch with an alarm. The four experimental conditions included an Alarm Only condition, a Variable pitch only condition, and two conditions using sonifications enhanced with additional sound dimensions. Accuracy to detect SpO2 target transitions was the primary outcome. We found that participants using the two sonifications enhanced with the additional sound dimensions of tremolo and brightness were significantly more accurate (83 and 96%, respectively) at detecting transitions to and from a target SpO2 range than participants using a pitch only sonification plus alarms (57%) as implemented in current pulse oximeters. Enhanced sonifications are more informative than conventional sonification. The implication is that they might allow anaesthetists to judge better when desaturation decreases below, or returns to, a target range. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com
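    The enhancement studied above layers an extra sound dimension (e.g., tremolo) onto the conventional variable-pitch pulse tone once SpO2 leaves the target range. The numpy sketch below illustrates that idea; the pitch mapping, tremolo rate, and range boundaries are assumptions for the sketch, not the study's actual sonification parameters:

    ```python
    import numpy as np

    FS = 8000  # sample rate (Hz), illustrative

    def spo2_tone(spo2, dur=0.25):
        """Synthesize one pulse tone. Pitch tracks SpO2 (the
        conventional variable-pitch cue); amplitude modulation
        (tremolo) is added, and deepens, as SpO2 falls below an
        assumed 90-100% target range."""
        t = np.arange(int(FS * dur)) / FS
        freq = 300.0 + 10.0 * (spo2 - 80.0)          # variable pitch cue
        tone = np.sin(2 * np.pi * freq * t)
        if spo2 < 90.0:                              # below target range
            depth = min(1.0, (90.0 - spo2) / 10.0)   # deeper when lower
            tone *= 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * 8.0 * t))
        return tone

    in_range = spo2_tone(95.0)    # steady tone
    desat = spo2_tone(85.0)       # same cue plus audible 8 Hz tremolo
    ```

    The design point the study makes is audible here: the in-range tone has a flat envelope, while the desaturated tone carries an unmistakable 8 Hz flutter, a categorical marker of leaving the target range rather than a subtle pitch shift the listener must track.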

  13. Cingulate neglect in humans: disruption of contralesional reward learning in right brain damage.

    PubMed

    Lecce, Francesca; Rotondaro, Francesca; Bonnì, Sonia; Carlesimo, Augusto; Thiebaut de Schotten, Michel; Tomaiuolo, Francesco; Doricchi, Fabrizio

    2015-01-01

    Motivational valence plays a key role in orienting spatial attention. Nonetheless, clinical documentation and understanding of motivationally based deficits of spatial orienting in humans is limited. Here, in one group study and two single-case studies, we examined right brain-damaged (RBD) patients with and without left spatial neglect in a spatial reward-learning task, in which the motivational valence of the left contralesional and the right ipsilesional space was contrasted. In each trial, two visual boxes were presented, one to the left and one to the right of central fixation. In one session, monetary rewards were released more frequently in the box on the left side (75% of trials), whereas in another session they were released more frequently on the right side. In each trial, patients were required to: 1) point to each of the two boxes; 2) choose one of the boxes to obtain monetary reward; 3) report explicitly the position of the reward and whether this position matched the original choice. Despite defective spontaneous allocation of attention toward the contralesional space, RBD patients with left spatial neglect showed preserved contralesional reward learning, i.e., comparable to ipsilesional learning and to reward learning displayed by patients without neglect. A notable exception in the group of neglect patients was L.R., who showed no sign of contralesional reward learning in a series of 120 consecutive trials despite being able to reach the learning criterion in only 20 trials in the ipsilesional space. L.R. suffered cortical-subcortical brain damage affecting the anterior components of the parietal-frontal attentional network and, compared with all other neglect and non-neglect patients, had additional lesion involvement of the medial anterior cingulate cortex (ACC) and of the adjacent sectors of the corpus callosum. In contrast to his lateralized motivational learning deficit, L.R.
had no lateral bias in the early phases of attentional processing as he suffered no contralesional visual or auditory extinction on double simultaneous tachistoscopic and dichotic stimulation and detected, with no exception, single contralesional visual and auditory stimuli. In a separate study, we were able to compare L.R. with another RBD patient, G.P., who had a selective lesion in the right ACC, in the adjacent callosal connections and the medial-basal prefrontal cortex. G.P. had no contralesional neglect and displayed normal reward learning both in the left and right side of space. These findings show that contralesional reward learning is generally preserved in RBD patients with left spatial neglect and that this can be exploited in rehabilitation protocols. Contralesional reward learning is severely disrupted in neglect patients when an additional lesion of the ACC is present: however, as demonstrated by the comparison between L.R. and G.P. cases, selective unilateral lesion of the right ACC does not produce motivational neglect for the contralesional space. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  15. Multisensory Integration Affects Visuo-Spatial Working Memory

    ERIC Educational Resources Information Center

    Botta, Fabiano; Santangelo, Valerio; Raffone, Antonino; Sanabria, Daniel; Lupianez, Juan; Belardinelli, Marta Olivetti

    2011-01-01

    In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial…

  16. Characterizing spatial tuning functions of neurons in the auditory cortex of young and aged monkeys: a new perspective on old data

    PubMed Central

    Engle, James R.; Recanzone, Gregg H.

    2012-01-01

    Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficit are peripheral in origin, others are centrally mediated. One such deficit is the ability to localize sounds, a critical component of segregating different acoustic objects and events, which is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning from neurons in primary auditory cortex to the caudal lateral field does not occur as it does in younger animals. As a decrease in inhibition with aging is common in the ascending auditory system, it is possible that this lack of sharpening is due to a decrease in inhibition at different periods within the response, or that spatial tuning is degraded as a consequence of reduced inhibition at non-best locations. In this report we found that aged animals had greater activity throughout the response period, but primarily during the onset of the response. This was most prominent at non-best directions, which is consistent with the hypothesis that inhibition is a primary mechanism for sharpening spatial tuning curves. We also noted that in aged animals the latency of the response was much shorter than in younger animals, consistent with a decrease in pre-onset inhibition. These results can be interpreted in the context of a failure of the timing and efficiency of feed-forward thalamo-cortical and cortico-cortical circuits in aged animals. Such a mechanism, if generalized across cortical areas, could play a major role in age-related cognitive decline. PMID:23316160

  17. Reduced auditory efferent activity in childhood selective mutism.

    PubMed

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is reduced in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions, and auditory brainstem responses. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimum possible in contexts that require complex auditory processing.

  18. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  19. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. Auditory BCIs can exploit many characteristics of the stimulus as cues, such as tone, pitch, and voice. Spatial information about auditory stimuli also provides useful information for a BCI. In a portable system, however, virtual auditory stimuli have to be presented spatially through earphones or headphones instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables virtual auditory stimuli to be presented to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended to the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performance. These results suggest that out-of-head sound localization makes possible a high-performance, loudspeaker-less, portable BCI system. PMID:23437338
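
    The jump from 70.0% single-trial accuracy to 89.5% with 10-trial averaging is what one would expect if averaging mainly suppresses trial-to-trial EEG noise. The sketch below is not the authors' pipeline; it simulates an idealized P300-like deflection (all waveform parameters are invented for illustration) and shows how averaging ten trials raises the SNR of the ERP estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized P300-like deflection over a 200-500 ms window; all waveform
# parameters here are invented for illustration.
t = np.linspace(0.2, 0.5, 60)
template = np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))

def simulate_trials(n_trials, noise_sd=2.0):
    """Return n_trials noisy copies of the template (signal + Gaussian noise)."""
    return template + rng.normal(0.0, noise_sd, size=(n_trials, t.size))

def snr(estimate):
    """Crude SNR: template power divided by the residual power of the estimate."""
    residual = estimate - template
    return float(np.sum(template ** 2) / np.sum(residual ** 2))

single_trial = simulate_trials(1).mean(axis=0)
ten_trial_avg = simulate_trials(10).mean(axis=0)

# Averaging N independent trials cuts the noise variance by roughly 1/N,
# so the 10-trial average is a much cleaner ERP estimate than one trial.
snr_single = snr(single_trial)
snr_avg = snr(ten_trial_avg)
```

    A cleaner averaged waveform gives the classifier a more separable input, consistent with the reported accuracy gain from single-trial to 10-trial classification.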

  20. Compression of auditory space during forward self-motion.

    PubMed

    Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti

    2012-01-01

    Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to the sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigated the effect of linear acceleration on auditory space representation. Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds actually located in the traveling direction were perceived as biased towards the null point. These results suggest a distortion of auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.

  1. Cellular and Molecular Underpinnings of Neuronal Assembly in the Central Auditory System during Mouse Development

    PubMed Central

    Di Bonito, Maria; Studer, Michèle

    2017-01-01

    During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562

  2. Noise-Induced Tinnitus Using Individualized Gap Detection Analysis and Its Relationship with Hyperacusis, Anxiety, and Spatial Cognition

    PubMed Central

    Pace, Edward; Zhang, Jinsheng

    2013-01-01

    Tinnitus has a complex etiology that involves auditory and non-auditory factors and may be accompanied by hyperacusis, anxiety and cognitive changes. Thus far, investigations of the interrelationship between tinnitus and auditory and non-auditory impairment have yielded conflicting results. To further address this issue, we noise-exposed rats and assessed them for tinnitus using a gap detection behavioral paradigm combined with statistically driven analysis to diagnose tinnitus in individual rats. We also tested rats for hearing detection, responsivity, and loss using prepulse inhibition and auditory brainstem responses, and for spatial cognition and anxiety using the Morris water maze and elevated plus maze. We found that our tinnitus diagnosis method reliably separated noise-exposed rats into tinnitus(+) and tinnitus(−) groups and detected no evidence of tinnitus in tinnitus(−) and control rats. In addition, the tinnitus(+) group demonstrated enhanced startle amplitude, indicating hyperacusis-like behavior. Despite these results, neither tinnitus, hyperacusis nor hearing loss had any significant effect on spatial learning and memory or anxiety, though a majority of the rats with the highest anxiety levels had tinnitus. These findings show that we were able to establish a clinically relevant tinnitus(+) group and that our diagnosis method is sound. At the same time, as in clinical studies, we found that tinnitus does not always result in cognitive-emotional dysfunction, although tinnitus may predispose subjects to certain impairments such as anxiety. Other behavioral assessments may be needed to further define the relationship between tinnitus and anxiety, cognitive deficits, and other impairments. PMID:24069375

  3. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional condition, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, a space bisection, participants evaluated the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition relative to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
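
    The MLE model referred to above combines cues weighted by their inverse variances, so a cue that becomes more reliable under attention earns a larger weight. A minimal sketch with invented variances (not the study's measured values):

```python
def mle_combine(x_a, var_a, x_t, var_t):
    """Maximum-likelihood (reliability-weighted) fusion of an auditory and
    a tactile position estimate: weights are inverse variances, and the
    fused variance is lower than either cue's alone."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_t)
    x_hat = w_a * x_a + (1.0 - w_a) * x_t
    var_hat = (var_a * var_t) / (var_a + var_t)
    return x_hat, var_hat, w_a

# Invented numbers: attending to the sound sharpens the auditory estimate
# (variance 4.0 -> 1.0), which raises the auditory weight and pulls the
# fused percept toward the auditory location.
x_base, var_base, w_base = mle_combine(x_a=10.0, var_a=4.0, x_t=0.0, var_t=1.0)
x_attn, var_attn, w_attn = mle_combine(x_a=10.0, var_a=1.0, x_t=0.0, var_t=1.0)
```

    This is exactly the "dynamic reweighting" question the study poses: if attention lowers the auditory variance, the auditory weight must rise for integration to remain optimal.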

  4. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match the specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude, both parameters known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
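
    An opponent (hemifield) channel code of the kind described can be sketched with two sigmoidal channels that saturate laterally and are steepest at the midline; reading out their difference reproduces the property that discriminability peaks at the interaural midline. The slope parameter below is an arbitrary illustrative choice, not a fitted value:

```python
import numpy as np

def channel(az_deg, preferred_side, slope=0.05):
    """Broadly tuned hemifield channel: a sigmoid of azimuth that is
    steepest at the interaural midline (0 deg) and saturates at lateral
    positions, as described for cortical spatial tuning."""
    return 1.0 / (1.0 + np.exp(-slope * preferred_side * az_deg))

def decode(az_deg):
    """Opponent readout: the difference of the two hemifield channels is a
    smooth, monotonic function of azimuth, zero at the midline."""
    return channel(az_deg, +1.0) - channel(az_deg, -1.0)

az = np.linspace(-90.0, 90.0, 181)   # 1-degree grid
code = decode(az)

# The slope of the readout (its sensitivity to azimuth changes) peaks at
# the midline, mirroring the reported maximum localization accuracy there.
sensitivity = np.gradient(code, az)
```

    The point of the sketch is that no topographic map is needed: two broad, opposed channels already give a monotonic code whose precision is naturally highest straight ahead.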

  5. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  6. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.

    PubMed

    Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".
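
    The final regression result can be illustrated with a toy ordinary-least-squares fit on simulated data. Plain OLS stands in for the stepwise procedure, and every number below is fabricated for illustration: with musicianship and MOT accuracy both built into a simulated spatial-hearing threshold (lower = better), the fit recovers negative coefficients for both predictors, i.e., better thresholds for musicians and for better trackers.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 24

# Hypothetical, simulated data (not the study's): a musicianship indicator,
# multiple-object-tracking (MOT) accuracy, and a spatial-hearing threshold
# built so both predictors carry independent signal, mirroring the
# direction of the reported regression result.
musician = np.tile([0.0, 1.0], n // 2)
mot = rng.normal(0.8, 0.1, size=n)
threshold = 2.0 - 1.5 * musician - 3.0 * mot + rng.normal(0.0, 0.2, size=n)

# Ordinary least squares with both predictors retained (a simple stand-in
# for the stepwise procedure used in the paper).
X = np.column_stack([np.ones(n), musician, mot])
beta, *_ = np.linalg.lstsq(X, threshold, rcond=None)
intercept, b_musician, b_mot = beta
```

    In a stepwise fit, each predictor would only be retained if it explained variance beyond the other; here both do by construction.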

  8. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  9. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  10. Auditory rehabilitation after stroke: treatment of auditory processing disorders in stroke patients with personal frequency-modulated (FM) systems.

    PubMed

    Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva

    2017-03-01

    Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments and AP tests, and completed the (modified) Amsterdam Inventory for Auditory Disability and the Hearing Handicap Inventory for the Elderly questionnaires. Nine of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. SRM improved significantly when the patients were wearing the FM systems compared to listening without them. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FM systems are feasible in stroke patients and show promise for addressing impaired AP after stroke. Implications for Rehabilitation: This is the first study to investigate the benefits of personal FM systems in stroke patients with disordered AP. All cases showed significantly improved speech perception in noise with the FM systems when the noise was spatially separated from the speech signal by 90°, compared with unaided listening. Personal FM systems are feasible in stroke patients and may benefit just under 20% of this population, who are not eligible for conventional hearing aids.
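
    The SRM definition in the abstract reduces to a simple dB difference. A sketch with hypothetical numbers (not the study's data):

```python
def spatial_release_from_masking(snr_colocated_db, snr_separated_db):
    """SRM as defined in the study: the 50%-correct SNR with co-located
    speech and babble minus the 50%-correct SNR with the babble moved to
    +/-90 degrees. Positive values mean spatial separation helped."""
    return snr_colocated_db - snr_separated_db

# Hypothetical illustration (not the study's data): a listener who needs
# +2 dB SNR when speech and babble are co-located but only -4 dB when the
# babble is at +/-90 degrees shows 6 dB of spatial release.
srm = spatial_release_from_masking(2.0, -4.0)
```

    Comparing this quantity with and without the FM system in place is the study's aided-versus-unaided contrast.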

  11. Bilateral cochlear implants in children: Effects of auditory experience and deprivation on auditory perception

    PubMed Central

    Litovsky, Ruth Y.; Gordon, Karen

    2017-01-01

    Spatial hearing skills are essential for children as they grow, learn and play. They provide critical cues for determining the locations of sources in the environment, and enable segregation of important sources, such as speech, from background maskers or interferers. Spatial hearing depends on the availability of monaural and binaural cues; the latter result from integration of the inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity; the major cues vital for binaural hearing are thus interaural time differences (ITDs) and interaural level differences (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4–5 years. In contrast, children who are deaf and hear through cochlear implants (CIs) do not have the opportunity to experience normal, binaural acoustic hearing early in life. These children must instead function using auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than listening through a single CI, they generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise, and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed, along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the issues underlying both the successes and the limitations experienced by children receiving bilateral cochlear implants are highly complex. PMID:26828740
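
    The two binaural cues named above can be estimated directly from a stereo signal: the ITD as the best-matching time shift between the ear signals, and the ILD as their level difference in dB. A self-contained sketch on a toy noise burst (the sample rate and the 10-sample delay are illustrative assumptions):

```python
import numpy as np

FS = 44_100  # sample rate in Hz (assumption for this sketch)

def estimate_itd(left, right, max_lag=40):
    """Estimate the ITD by searching for the integer-sample shift of the
    right-ear signal that best matches the left-ear signal. A positive
    ITD means the sound arrived at the left ear first."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(right, -lag)          # undo a lag-sample delay
        score = float(np.dot(left, shifted))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / FS

def estimate_ild_db(left, right):
    """ILD as the RMS level difference in dB between the two ear signals."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms(left) / rms(right))

# Toy stimulus: a noise burst reaching the right ear 10 samples (~0.23 ms)
# after the left ear, roughly a source in the left hemifield. A circular
# shift is used for simplicity, so the levels at the two ears are equal.
rng = np.random.default_rng(1)
sig = rng.normal(size=1000)
left_ear = sig
right_ear = np.roll(sig, 10)

itd = estimate_itd(left_ear, right_ear)
```

    A CI processor's envelope coding degrades exactly these fine-timing cues, which is one reason bilaterally implanted children fall short of NH peers on ITD-based tasks.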

  12. An evaluation of unisensory and multisensory adaptive flight-path navigation displays

    NASA Astrophysics Data System (ADS)

    Moroney, Brian W.

    1999-11-01

    The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up display based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position had been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations (the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite) were combined factorially with three concurrent-task (CT) conditions (no CT, visual CT, and auditory CT) in a completely repeated-measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface.
It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)
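
    The activation rule described above (supplemental guidance appears only beyond a 450 ft cross-track error, with the added modality depending on the interface configuration) can be sketched as a simple threshold function. The function and configuration names are illustrative assumptions, not the simulator's actual code.

```python
# Sketch of the adaptive-display activation rule: guidance is added only
# when the aircraft drifts more than 450 ft from the commanded path.
ACTIVATION_THRESHOLD_FT = 450.0

def active_adaptive_displays(cross_track_error_ft, configuration):
    """Return the set of adaptive display modalities currently shown.

    configuration is one of 'SV', 'SV+visual', 'SV+auditory',
    'SV+composite' (the four interface conditions of the study).
    """
    if abs(cross_track_error_ft) <= ACTIVATION_THRESHOLD_FT:
        return set()  # on course: standard visual interface only
    return {
        "SV": set(),
        "SV+visual": {"visual"},
        "SV+auditory": {"auditory"},
        "SV+composite": {"visual", "auditory"},
    }[configuration]
```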

  13. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals

    PubMed Central

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz

    2016-01-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504
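
    The linear-combination model described above can be sketched as an ordinary least-squares fit of the two weights. The per-condition signal values and perceived rotations below are illustrative assumptions chosen to show a vestibular-dominated fit, not the paper's data.

```python
import numpy as np

# Sketch of the model: perceived head rotation as a weighted sum of the
# vestibular signal and the proprioceptive/efference-copy signal.
def fit_weights(vestibular, proprio, perceived):
    """Least-squares estimate of (w_vest, w_prop) in
    perceived = w_vest * vestibular + w_prop * proprio."""
    X = np.column_stack([vestibular, proprio])
    w, *_ = np.linalg.lstsq(X, perceived, rcond=None)
    return w

# One row per condition for a ~35 deg head turn (illustrative values):
# active: both signals present; passive: vestibular only; cancellation:
# proprio/efference copy only (head effectively fixed in space).
vest = np.array([35.0, 35.0, 0.0])
prop = np.array([35.0, 0.0, 35.0])
perceived = np.array([35.0, 28.0, 7.0])  # assumed, vestibular-dominated

w_vest, w_prop = fit_weights(vest, prop, perceived)
# With these values the fit is exact: w_vest = 0.8, w_prop = 0.2
```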

  14. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals.

    PubMed

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz; MacNeilage, Paul R

    2016-08-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. Copyright © 2016 the American Physiological Society.

  15. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes.

    PubMed

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution, and then a judgment-of-relative-direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs).

  16. Auditory display for the blind

    NASA Technical Reports Server (NTRS)

    Fish, R. M. (Inventor)

    1974-01-01

    A system for providing an auditory display of two-dimensional patterns as an aid to the blind is described. It includes a scanning device for producing first and second voltages respectively indicative of the vertical and horizontal positions of the scan and a further voltage indicative of the intensity at each point of the scan and hence of the presence or absence of the pattern at that point. The voltage related to scan intensity controls transmission of the sounds to the subject so that the subject knows that a portion of the pattern is being encountered by the scan when a tone is heard, the subject determining the position of this portion of the pattern in space by the frequency and interaural difference information contained in the tone.
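
    The mapping the patent describes (scan intensity gating a tone whose frequency and interaural difference encode position) might be sketched as follows. The parameter choices, ranges, and the use of a pure level difference are illustrative assumptions, not the patented circuitry.

```python
import numpy as np

def point_to_stereo_tone(x, y, intensity, dur=0.05, sr=8000):
    """Map one scan point to a brief stereo tone.

    x, y in [0, 1]: horizontal and vertical scan position.
    intensity gates the tone: silence where no pattern is present.
    """
    n = int(dur * sr)
    if intensity <= 0.5:                # no pattern at this point
        return np.zeros(n), np.zeros(n)
    freq = 200.0 + 1800.0 * y           # higher on the page -> higher pitch
    t = np.arange(n) / sr
    tone = np.sin(2 * np.pi * freq * t)
    # Interaural level difference encodes horizontal position:
    return (1.0 - x) * tone, x * tone   # (left, right)
```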

  17. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. 
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.

  18. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  19. The Effects of Auditory Information on 4-Month-Old Infants' Perception of Trajectory Continuity

    ERIC Educational Resources Information Center

    Bremner, J. Gavin; Slater, Alan M.; Johnson, Scott P.; Mason, Uschi C.; Spring, Jo

    2012-01-01

    Young infants perceive an object's trajectory as continuous across occlusion provided the temporal or spatial gap in perception is small. In 3 experiments involving 72 participants the authors investigated the effects of different forms of auditory information on 4-month-olds' perception of trajectory continuity. Provision of dynamic auditory…

  20. Intentional switching in auditory selective attention: Exploring age-related effects in a spatial setup requiring speech perception.

    PubMed

    Oberem, Josefa; Koch, Iring; Fels, Janina

    2017-06-01

    Using a binaural-listening paradigm, age-related differences in the ability to intentionally switch auditory selective attention between two speakers, defined by their spatial location, were examined. Forty normal-hearing participants (20 young, mean age 24.8 years; 20 older, mean age 67.8 years) were tested. Spatial reproduction of the stimuli was provided over headphones using head-related transfer functions of an artificial head. Spoken number words from two speakers were presented simultaneously to participants from two out of eight locations on the horizontal plane. Guided by a visual cue indicating the spatial location of the target speaker, the participants were asked to categorize the target's number word as smaller or greater than five while ignoring the distractor's speech. Results showed significantly higher reaction times and error rates for older participants. The relative influence of a spatial switch of the target speaker (switch vs. repetition of the speaker's direction in space) was identical across age groups. Congruency effects (stimuli spoken by target and distractor may evoke the same answer or different answers) were increased for older participants and depended on the target's position. Results suggest that the ability to intentionally switch auditory attention to a newly cued location was unimpaired, whereas it was generally harder for older participants to suppress processing of the distractor's speech. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Exploring the temporal dynamics of sustained and transient spatial attention using steady-state visual evoked potentials.

    PubMed

    Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte

    2017-05-01

    While the behavioral dynamics as well as the functional network of sustained and transient attention have extensively been studied, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant stimuli in either the auditory or the visual modality. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and were separately analyzed. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent, but interacting neural mechanisms of sustained and transient attentional orienting.
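
    The two single-trial SSVEP measures described above can be sketched as the complex Fourier coefficient at the tagging frequency (amplitude and phase) and the inter-trial phase-locking value across trials. The sampling rate, trial length, and simulated signals are illustrative assumptions, not the study's recordings.

```python
import numpy as np

def ssvep_coeff(trial, freq, sr):
    """Complex Fourier coefficient of one trial at the tagging frequency."""
    t = np.arange(trial.size) / sr
    return np.mean(trial * np.exp(-2j * np.pi * freq * t))

def phase_locking(trials, freq, sr):
    """Inter-trial phase-locking value in [0, 1]."""
    coeffs = np.array([ssvep_coeff(np.asarray(trial), freq, sr)
                       for trial in trials])
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

sr, freq = 250, 10.0                      # assumed sampling and tag rates
t = np.arange(sr) / sr                    # 1-s simulated trials
# Phase-locked trials (constant phase) vs. phase-jittered trials:
locked = [np.sin(2 * np.pi * freq * t + 0.3) for _ in range(20)]
rng = np.random.default_rng(0)
jittered = [np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
            for _ in range(20)]
```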

  2. Primitive Auditory Memory Is Correlated with Spatial Unmasking That Is Based on Direct-Reflection Integration

    PubMed Central

    Li, Huahui; Kong, Lingzhi; Wu, Xihong; Li, Liang

    2013-01-01

    In reverberant rooms with multiple-people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadowing and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and results in the spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamic of the direct-reflection-integration-based spatial unmasking as a function of the reflection delay, and (2) whether this temporal dynamic is correlated with the listeners’ auditory ability to temporally retain raw acoustic signals (i.e., the fast decaying primitive auditory memory, PAM). The results showed that recognition of the target speech against the speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamic of the speech-recognition function is significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, which links the earlier-stage PAM with the later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions. PMID:23658664
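
    The "descending exponential function of the delay" reported above can be written explicitly. The asymptote, zero-delay accuracy, and time constant below are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

def recognition_rate(delay_ms, r0=0.90, r_floor=0.55, tau_ms=32.0):
    """Descending exponential: R(d) = floor + (R0 - floor) * exp(-d / tau).

    r0 is recognition at zero reflection delay, r_floor the asymptote
    once direct-reflection integration no longer helps.
    """
    d = np.asarray(delay_ms, dtype=float)
    return r_floor + (r0 - r_floor) * np.exp(-d / tau_ms)
```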

  3. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.

  4. Short-Term Memory for Space and Time Flexibly Recruit Complementary Sensory-Biased Frontal Lobe Attention Networks.

    PubMed

    Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C

    2015-08-19

    The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Intracranial mapping of auditory perception: event-related responses and electrocortical stimulation.

    PubMed

    Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D

    2009-01-01

    We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.

  6. Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation

    PubMed Central

    Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.

    2010-01-01

    Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540

  7. Cross-modal detection using various temporal and spatial configurations.

    PubMed

    Schirillo, James A

    2011-01-01

    To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8°, and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
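
    The sensitivity (d') and response-criterion (β) measures used above follow from the standard equal-variance Gaussian signal-detection model: d' = z(H) - z(FA) and β is the likelihood ratio at the cutoff. The hit and false-alarm rates below are illustrative, not the paper's data.

```python
import math
from statistics import NormalDist

def dprime_beta(hit_rate, fa_rate):
    """Sensitivity d' and likelihood-ratio criterion beta
    under the equal-variance Gaussian signal-detection model."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf
    beta = math.exp((zf * zf - zh * zh) / 2.0)
    return d_prime, beta

d, beta = dprime_beta(0.80, 0.20)  # symmetric rates -> unbiased observer
```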

  8. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  9. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  10. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  11. Focused and shifting attention in children with heavy prenatal alcohol exposure.

    PubMed

    Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R

    2006-05-01

    Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was accurate but had slowed reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.

  12. Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia

    PubMed Central

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia that previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328

  13. Auditory Verbal Experience and Agency in Waking, Sleep Onset, REM, and Non-REM Sleep.

    PubMed

    Speth, Jana; Harley, Trevor A; Speth, Clemens

    2017-04-01

    We present one of the first quantitative studies on auditory verbal experiences ("hearing voices") and auditory verbal agency (inner speech, and specifically "talking to (imaginary) voices or characters") in healthy participants across states of consciousness. Tools of quantitative linguistic analysis were used to measure participants' implicit knowledge of auditory verbal experiences (VE) and auditory verbal agencies (VA), displayed in mentation reports from four different states. Analysis was conducted on a total of 569 mentation reports from rapid eye movement (REM) sleep, non-REM sleep, sleep onset, and waking. Physiology was controlled with the Nightcap sleep-wake mentation monitoring system. Sleep-onset hallucinations, traditionally the focus of scientific attention on auditory verbal hallucinations, showed the lowest degrees of VE and VA, whereas REM sleep showed the highest. Degrees of different linguistic-pragmatic aspects of VE and VA likewise depended on the physiological state. The quantity and pragmatics of VE and VA are thus a function of the physiologically distinct state of consciousness in which they are conceived. Copyright © 2016 Cognitive Science Society, Inc.

  14. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    NASA Astrophysics Data System (ADS)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's engagement with the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  15. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  16. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
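
    The spatial cue at the heart of this model, the interaural phase difference jointly encoded with amplitude by the second layer, can be illustrated with standard signal processing. The sketch below is not the paper's sparse-coding model; it simply recovers the amplitude-bearing bin and phase difference for one stereo tone at a single DFT bin, the kind of cue the learned representation separates. All signal parameters are illustrative.

```python
import cmath
import math

def dft_bin(signal, k):
    """Single-bin DFT: correlate the signal with exp(-2*pi*i*k*t/N)."""
    n = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * t / n)
               for t, x in enumerate(signal))

def interaural_phase_difference(left, right, k):
    """IPD at bin k: phase(left) - phase(right), wrapped to (-pi, pi]."""
    return cmath.phase(dft_bin(left, k) * dft_bin(right, k).conjugate())

# A 500 Hz tone delayed by 0.5 ms in the right ear (illustrative values).
fs, f0, itd, n = 16000, 500.0, 0.0005, 1024
left = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
right = [math.sin(2 * math.pi * f0 * (t / fs - itd)) for t in range(n)]
k = round(f0 * n / fs)                  # the tone falls exactly in bin 32
ipd = interaural_phase_difference(left, right, k)
expected = 2 * math.pi * f0 * itd       # phase lag predicted by the delay
```

    For this 0.5 ms delay the recovered IPD equals the 2·π·f·Δt lag predicted by the geometry, which is exactly the kind of regularity an efficient code of natural binaural input can exploit.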

  17. The effect of spatial auditory landmarks on ambulation.

    PubMed

    Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E

    2018-02-01

    The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 centimeters from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment tested the effect of moving the speaker's azimuthal position to 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Effect of the environment on the dendritic morphology of the rat auditory cortex

    PubMed Central

    Bose, Mitali; Muñoz-Llancao, Pablo; Roychowdhury, Swagata; Nichols, Justin A.; Jakkamsetti, Vikram; Porter, Benjamin; Byrapureddy, Rajasekhar; Salgado, Humberto; Kilgard, Michael P.; Aboitiz, Francisco; Dagnino-Subiabre, Alexies; Atzori, Marco

    2010-01-01

    The present study aimed to identify morphological correlates of environment-induced changes at excitatory synapses of the primary auditory cortex (A1). We used the Golgi-Cox stain technique to compare the dendritic properties of pyramidal cells of Sprague-Dawley rats exposed to different environmental manipulations. Sholl analysis, dendritic length measures, and spine density counts were used to monitor the effects of sensory deafness and an auditory version of environmental enrichment (EE). We found that deafness decreased apical dendritic length while leaving basal dendritic length unchanged, whereas EE selectively increased basal dendritic length without changing apical dendritic length. In contrast, deafness decreased, whereas EE increased, spine density in both basal and apical dendrites of A1 layer 2/3 (LII/III) neurons. To determine whether stress contributed to the observed morphological changes in A1, we studied neural morphology in a restraint-induced stress model that lacked behaviorally relevant acoustic cues. We found that stress selectively decreased apical dendritic length in the auditory but not in the visual primary cortex. Similar to the acoustic manipulation, stress-induced changes in dendritic length followed a layer-specific pattern: LII/III neurons from stressed animals displayed normal apical dendrites but shorter basal dendrites, while infragranular neurons (layers V and VI) displayed shorter apical dendrites but normal basal dendrites. The same treatment did not induce similar changes in the visual cortex, demonstrating that the auditory cortex is an exquisitely sensitive target of neocortical plasticity and that prolonged exposure to different acoustic as well as emotional environmental manipulations may produce specific changes in dendritic shape and spine density. PMID:19771593

  19. Effects of cannabis on cognition in patients with MS: a psychometric and MRI study.

    PubMed

    Pavisian, Bennis; MacIntosh, Bradley J; Szilagyi, Greg; Staines, Richard W; O'Connor, Paul; Feinstein, Anthony

    2014-05-27

    To determine functional and structural neuroimaging correlates of cognitive dysfunction associated with cannabis use in multiple sclerosis (MS). In a cross-sectional study, 20 subjects with MS who smoked cannabis and 19 noncannabis users with MS, matched on demographic and neurologic variables, underwent fMRI while completing a test of working memory, the N-Back. Resting-state fMRI and structural MRI data (lesion and normal-appearing brain tissue volumes, diffusion tensor imaging metrics) were also collected. Neuropsychological data pertaining to verbal (Selective Reminding Test Revised) and visual (10/36 Spatial Recall Test) memory, information processing speed (Paced Auditory Serial Addition Test [2- and 3-second versions] and Symbol Digit Modalities Test), and attention (Word List Generation) were obtained. The cannabis group performed more poorly on the more demanding of the Paced Auditory Serial Addition Test tasks (i.e., 2-second version) (p < 0.02) and the 10/36 Spatial Recall Test (p < 0.03). Cannabis users had more diffuse cerebral activation across all N-Back trials and made more errors on the 2-Back task (p < 0.006), during which they displayed increased activation relative to nonusers in parietal (p < 0.007) and anterior cingulate (p < 0.001) regions implicated in working memory. No group differences in resting-state networks or structural MRI variables were found. Patients with MS who smoke cannabis are more cognitively impaired than nonusers. Cannabis further compromises cerebral compensatory mechanisms, already faulty in MS. These imaging data boost the construct validity of the neuropsychological findings and act as a cautionary note to cannabis users and prescribers. © 2014 American Academy of Neurology.

  20. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    ERIC Educational Resources Information Center

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  1. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  2. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  3. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  4. Electrotactile and vibrotactile displays for sensory substitution systems

    NASA Technical Reports Server (NTRS)

    Kaczmarek, Kurt A.; Webster, John G.; Bach-Y-rita, Paul; Tompkins, Willis J.

    1991-01-01

    Sensory substitution systems provide their users with environmental information through a human sensory channel (eye, ear, or skin) different from that normally used or with the information processed in some useful way. The authors review the methods used to present visual, auditory, and modified tactile information to the skin and discuss present and potential future applications of sensory substitution, including tactile vision substitution (TVS), tactile auditory substitution, and remote tactile sensing or feedback (teletouch). The relevant sensory physiology of the skin, including the mechanisms of normal touch and the mechanisms and sensations associated with electrical stimulation of the skin using surface electrodes (electrotactile, or electrocutaneous, stimulation), is reviewed. The information-processing ability of the tactile sense and its relevance to sensory substitution is briefly summarized. The limitations of current tactile display technologies are discussed.

  5. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000
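
    The image-to-"soundscape" conversion described above can be caricatured in a few lines. The study's specific device is not named in this record, so the following is a toy version of the common column-scan scheme used by such SSDs: image columns play in left-to-right sequence, row position maps to pitch (top = high), and pixel brightness to loudness. Sample rate, frequency range, and timing are all illustrative choices.

```python
import math

def image_to_soundscape(image, fs=8000, col_dur=0.05,
                        f_lo=500.0, f_hi=3000.0):
    """Toy visual-to-auditory mapping: one time slice per image column;
    each row contributes a sinusoid whose frequency encodes height and
    whose amplitude encodes brightness (0..1)."""
    n_rows = len(image)
    n_samples = int(fs * col_dur)
    freqs = [f_hi - (f_hi - f_lo) * r / max(n_rows - 1, 1)
             for r in range(n_rows)]           # top row -> highest pitch
    samples = []
    for col in range(len(image[0])):
        for t in range(n_samples):
            s = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t / fs)
                    for r in range(n_rows))
            samples.append(s / n_rows)         # normalize so |s| <= 1
    return samples

# A 2x2 image with a single bright pixel in the top-left corner:
img = [[1.0, 0.0],
       [0.0, 0.0]]
sound = image_to_soundscape(img)               # tone only in the first slice
```

    A listener scanning such soundscapes hears the bright pixel as a high tone at the start of the sweep; active exploration of a floor scene, as in the study, amounts to re-triggering this conversion from different vantage points.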

  6. Binding of Verbal and Spatial Features in Auditory Working Memory

    ERIC Educational Resources Information Center

    Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.

    2009-01-01

    The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g. V[subscript 1]S[subscript 1], V[subscript 2]S[subscript 2], V[subscript 3]S[subscript 3], V[subscript 4]S[subscript 4]) were…

  7. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  8. Into the Wild: Neuroergonomic Differentiation of Hand-Held and Augmented Reality Wearable Displays during Outdoor Navigation with Functional Near Infrared Spectroscopy.

    PubMed

    McKendrick, Ryan; Parasuraman, Raja; Murtza, Rabia; Formwalt, Alice; Baccus, Wendy; Paczynski, Martin; Ayaz, Hasan

    2016-01-01

    Highly mobile computing devices promise to improve quality of life, productivity, and performance. Increased situation awareness and reduced mental workload are two potential means by which this can be accomplished. However, it is difficult to measure these concepts in the "wild". We employed ultra-portable, battery-operated, wireless functional near infrared spectroscopy (fNIRS) to non-invasively measure hemodynamic changes in the brain's prefrontal cortex (PFC). Measurements were taken during navigation of a college campus with either a hand-held display or an augmented reality wearable display (ARWD). Hemodynamic measures were also paired with secondary tasks of visual perception and auditory working memory to provide behavioral assessment of situation awareness and mental workload. Navigating with an augmented reality wearable display produced the least workload during the auditory working memory task, and a trend for improved situation awareness in our measures of prefrontal hemodynamics. The hemodynamics associated with errors also differed between the two devices. Errors with an augmented reality wearable display were associated with increased prefrontal activity, and the opposite was observed for the hand-held display. This suggests that the cognitive mechanisms underlying errors differ between the two devices. These findings show that fNIRS is a valuable tool for assessing new technology in ecologically valid settings and that ARWDs offer benefits with regard to mental workload while navigating, and potentially superior situation awareness with improved display design.

  10. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered immediately afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  11. Effects of training and motivation on auditory P300 brain-computer interface performance.

    PubMed

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to test whether performance could be further optimized. The influence of motivation, mood, and workload on performance and the P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session, information transfer rates (ITRs) increased from 3.72 to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate with their environment independently of gaze control. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
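
    Accuracy and bits-per-minute figures like those above are conventionally related by the Wolpaw information transfer rate formula, the standard ITR metric in the BCI literature. A sketch, assuming the 5 × 5 matrix is treated as N = 25 selection classes (the study's exact per-selection timing is not given in this record):

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR per selection, for accuracy p in (0, 1]:
    B = log2(N) + p*log2(p) + (1 - p)*log2((1 - p)/(N - 1)).
    Chance-level accuracy (p = 1/N) yields exactly zero bits."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)   # perfect accuracy gives the full log2(N) bits
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_classes, accuracy, selections_per_min):
    """Scale per-selection information by selection speed."""
    return wolpaw_bits_per_selection(n_classes, accuracy) * selections_per_min

# 25 targets at the 70% online-accuracy criterion:
bits = wolpaw_bits_per_selection(25, 0.70)
```

    Because the formula only rewards above-chance selections, reported ITR gains across sessions can come from either improved accuracy or faster spelling.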

  12. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency.

    PubMed

    Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich

    2017-02-01

    The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used electroencephalography (EEG) to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections while the EEG was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in the audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. The Influence of Eye Closure on Somatosensory Discrimination: A Trade-off Between Simple Perception and Discrimination.

    PubMed

    Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W

    2017-06-01

    We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. Somatosensory P50m was reduced during a task requiring the distinguishing of stimulus location changes at the distal phalanges of different fingers. The somatosensory P50m was further reduced and detection performance was higher when the eyes were open. A similar reduction was found for the auditory P50m during a task requiring the distinguishing of changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. 
Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
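
The discrimination-model idea above (estimate the probability an observer reports image A different from image B) can be sketched as follows. This is a deliberately toy version: RMSE stands in for the model's visual front end, and `noise_sigma` is an assumed internal-noise parameter, not a calibrated value from the NASA Ames models.

```python
from math import erf, sqrt

import numpy as np

def discrimination_probability(a, b, noise_sigma=0.05):
    """Map a perceptual distance between images A and B to the
    probability that an observer reports them as different.
    Toy sketch: RMSE distance plus a Gaussian internal-noise model."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d_prime = np.sqrt(np.mean((a - b) ** 2)) / noise_sigma
    # Same/different judgment: chance (0.5) at d' = 0, approaching 1
    # as the images become more discriminable.
    return 0.5 * (1.0 + erf(d_prime / (2.0 * sqrt(2.0))))
```

In the display-evaluation use described above, A would be the desired image and B the actual displayed image; identical inputs yield chance-level (0.5) "different" responses.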

  15. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  16. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.

  17. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  18. Attentional models of multitask pilot performance using advanced display technology.

    PubMed

    Wickens, Christopher D; Goh, Juliana; Helleberg, John; Horrey, William J; Talleur, Donald A

    2003-01-01

    In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.

  19. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
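
The "sound level" cue named above follows the free-field inverse-square law, which can be sketched in a few lines. This is a simplified model under an assumed anechoic condition; real rooms add the reverberation cue, which this ignores.

```python
import math

def level_at_distance(level_1m_db, distance_m):
    """Free-field inverse-square law: sound level falls by about 6 dB
    per doubling of distance from the source. A minimal sketch of the
    level cue for auditory distance, assuming no reverberation."""
    return level_1m_db - 20.0 * math.log10(distance_m)

# A 60 dB source at 1 m is ~54 dB at 2 m and ~48 dB at 4 m.
```

The roughly 6 dB drop per doubling is what makes level informative mainly for relative, rather than absolute, distance judgments, consistent with the blind listeners' profile described above.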

  20. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  1. Missing a trick: Auditory load modulates conscious awareness in audition.

    PubMed

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  3. Thinking about touch facilitates tactile but not auditory processing.

    PubMed

    Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris

    2012-05-01

    Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while the studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discriminations of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster as compared to auditory stimuli and vice versa. On average, tactile stimuli were responded to faster as compared to auditory stimuli, and stimuli in the imagery condition were on average responded to slower as compared to baseline performance (left/right discrimination without imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.

  4. Diverse Roles of Axonemal Dyneins in Drosophila Auditory Neuron Function and Mechanical Amplification in Hearing.

    PubMed

    Karak, Somdatta; Jacobs, Julie S; Kittelmann, Maike; Spalthoff, Christian; Katana, Radoslaw; Sivan-Loukianova, Elena; Schon, Michael A; Kernan, Maurice J; Eberl, Daniel F; Göpfert, Martin C

    2015-11-26

    Much like vertebrate hair cells, the chordotonal sensory neurons that mediate hearing in Drosophila are motile and amplify the mechanical input of the ear. Because the neurons bear mechanosensory primary cilia whose microtubule axonemes display dynein arms, we hypothesized that their motility is powered by dyneins. Here, we describe two axonemal dynein proteins that are required for Drosophila auditory neuron function, localize to their primary cilia, and differently contribute to mechanical amplification in hearing. Promoter fusions revealed that the two axonemal dynein genes Dmdnah3 (=CG17150) and Dmdnai2 (=CG6053) are expressed in chordotonal neurons, including the auditory ones in the fly's ear. Null alleles of both dyneins equally abolished electrical auditory neuron responses, yet whereas mutations in Dmdnah3 facilitated mechanical amplification, amplification was abolished by mutations in Dmdnai2. Epistasis analysis revealed that Dmdnah3 acts downstream of Nan-Iav channels in controlling the amplificatory gain. Dmdnai2, in addition to being required for amplification, was essential for outer dynein arms in auditory neuron cilia. This establishes diverse roles of axonemal dyneins in Drosophila auditory neuron function and links auditory neuron motility to primary cilia and axonemal dyneins. Mutant defects in sperm competition suggest that both dyneins also function in sperm motility.

  5. Effect of delayed auditory feedback on normal speakers at two speech rates

    NASA Astrophysics Data System (ADS)

    Stuart, Andrew; Kalinowski, Joseph; Rastatter, Michael P.; Lynch, Kerry

    2002-05-01

    This study investigated the effect of short and long auditory feedback delays at two speech rates with normal speakers. Seventeen participants spoke under delayed auditory feedback (DAF) at 0, 25, 50, and 200 ms at normal and fast rates of speech. Significantly two to three times more dysfluencies were displayed at 200 ms (p<0.05) relative to no delay or the shorter delays. There were significantly more dysfluencies observed at the fast rate of speech (p=0.028). These findings implicate the peripheral feedback system(s) of fluent speakers for the disruptive effects of DAF on normal speech production at long auditory feedback delays. Considering the contrast in fluency/dysfluency exhibited between normal speakers and those who stutter at short and long delays, it appears that speech disruption of normal speakers under DAF is a poor analog of stuttering.
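
The DAF manipulation itself is simply a delay line between microphone and headphones. A minimal sketch of that manipulation (not the study's actual apparatus) for the 0-200 ms conditions:

```python
import numpy as np

def delayed_auditory_feedback(signal, delay_ms, fs):
    """Simulate what a talker hears under DAF: the microphone signal
    returned to the headphones shifted by `delay_ms` milliseconds.
    Illustrative sketch; silence is assumed before speech onset."""
    d = int(round(delay_ms * fs / 1000.0))
    return np.concatenate([np.zeros(d), signal])

fs = 1000                        # assumed sample rate for the example
speech = np.ones(100)            # stand-in for a speech waveform
heard = delayed_auditory_feedback(speech, 200, fs)  # the 200 ms condition
```

At 200 ms the returned signal lags roughly a syllable behind articulation, which is the regime where the abstract reports two to three times more dysfluencies.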

  6. Development of Mobile Accessible Pedestrian Signals (MAPS) for Blind Pedestrians at Signalized Intersections

    DOT National Transportation Integrated Search

    2011-06-01

    People with vision impairment have different perception and spatial cognition as compared to the sighted people. Blind pedestrians primarily rely on auditory, olfactory, or tactile feedback to determine spatial location and find their way. They gener...

  7. Diffusion tensor imaging and MR spectroscopy of microstructural alterations and metabolite concentration changes in the auditory neural pathway of pediatric congenital sensorineural hearing loss patients.

    PubMed

    Wu, Chunxiao; Huang, Lexing; Tan, Hui; Wang, Yanting; Zheng, Hongyi; Kong, Lingmei; Zheng, Wenbin

    2016-05-15

    Our objective was to evaluate age-dependent changes in microstructure and metabolism in the auditory neural pathway, of children with profound sensorineural hearing loss (SNHL), and to differentiate between good and poor surgical outcome cochlear implantation (CI) patients by using diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS). Ninety-two SNHL children (49 males, 43 females; mean age, 4.9 years) were studied by conventional MR imaging, DTI and MRS. Patients were divided into three groups: Group A consisted of children ≤1 years old (n=20), Group B consisted of children 1-3 years old (n=31), and Group C consisted of children 3-14 years old (n=41). Among the 31 patients (19 males and 12 females, 12 months to 14 years) with CI, 18 patients (mean age 4.8±0.7 years) with a Categories of Auditory Performance (CAP) score over five were classified into the good outcome group and 13 patients (mean age, 4.4±0.7 years) with a CAP score below five were classified into the poor outcome group. Two DTI parameters, fractional anisotropy (FA) and apparent diffusion coefficient (ADC), were measured in the superior temporal gyrus (STG) and auditory radiation. Regions of interest for metabolic change measurements were located inside the STG. DTI values were measured using region-of-interest analysis, and MRS values were used for correlation analysis with CAP scores. Compared with healthy individuals, 92 SNHL patients displayed decreased FA values in the auditory radiation and STG (p<0.05). Only decreased FA values in the auditory radiation were observed in Group A. Decreased FA values in the auditory radiation and STG were both observed in Groups B and C. However, in Group C, the N-acetyl aspartate/creatinine ratio in the STG was also significantly decreased (p<0.05). Correlation analyses at 12 months post-operation revealed strong correlations between the FA, in the auditory radiation, and CAP scores (r=0.793, p<0.01).
DTI and MRS can be used to evaluate microstructural alterations and metabolite concentration changes in the auditory neural pathway that are not detectable by conventional MR imaging. The observed changes in FA suggest that children with SNHL have a developmental delay in myelination of the auditory neural pathway; the greater metabolite concentration changes in the auditory cortex of older children suggest that early cochlear implantation might be more effective in restoring hearing in children with SNHL. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.
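
The FA values compared above come from the standard fractional-anisotropy definition over the three diffusion-tensor eigenvalues. The abstract does not describe the authors' processing pipeline, so the sketch below shows only the standard formula:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy (FA) from the three diffusion-tensor
    eigenvalues: 0 for fully isotropic diffusion, 1 for diffusion
    confined to a single axis. Standard definition; not the study's
    specific pipeline."""
    mean = (l1 + l2 + l3) / 3.0
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0.0:
        return 0.0  # degenerate tensor: treat as isotropic
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    return math.sqrt(1.5 * num / den)
```

Lower FA in a tract such as the auditory radiation is conventionally read as less directionally coherent diffusion, consistent with the delayed-myelination interpretation above.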

  8. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
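
The HRTF-filtering technique described above amounts to convolving a mono signal with a left/right head-related impulse response (HRIR) pair. The sketch below uses crude placeholder HRIRs, not measured head-related transfer functions:

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual position by convolving it
    with a left/right HRIR pair, then zero-padding the shorter
    channel so both have equal length."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])

# Toy example: the right ear receives a delayed, attenuated copy,
# crudely imitating the interaural time and level differences of a
# source on the listener's left (5 samples ~ 0.6 ms at 8 kHz).
fs = 8000
t = np.arange(fs // 10) / fs
mono = np.sin(2 * np.pi * 440 * t)
hrir_l = np.array([1.0])
hrir_r = np.concatenate([np.zeros(5), [0.6]])
stereo = binauralize(mono, hrir_l, hrir_r)
```

Real systems replace the toy impulse responses with measured outer-ear transfer functions, which also impose the spectral (pinna) cues that delay and attenuation alone cannot provide.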

  9. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  10. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  11. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  12. Similarity in Spatial Origin of Information Facilitates Cue Competition and Interference

    ERIC Educational Resources Information Center

    Amundson, Jeffrey C.; Miller, Ralph R.

    2007-01-01

    Two lick suppression studies were conducted with water-deprived rats to investigate the influence of spatial similarity in cue interaction. Experiment 1 assessed the influence of similarity of the spatial origin of competing cues in a blocking procedure. Greater blocking was observed in the condition in which the auditory blocking cue and the…

  13. Neural coding of high-frequency tones

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1976-01-01

    Available evidence was presented indicating that neural discharges in the auditory nerve display characteristic periodicities in response to any tonal stimulus including high-frequency stimuli, and that this periodicity corresponds to the subjective pitch.

  14. Beethoven's Last Piano Sonata and Those Who Follow Crocodiles: Cross-Domain Mappings of Auditory Pitch in a Musical Context

    ERIC Educational Resources Information Center

    Eitan, Zohar; Timmers, Renee

    2010-01-01

    Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total number of 35 pitch mappings and investigated in four experiments how these mappings are used and…

  15. Near-Independent Capacities and Highly Constrained Output Orders in the Simultaneous Free Recall of Auditory-Verbal and Visuo-Spatial Stimuli

    ERIC Educational Resources Information Center

    Cortis Mack, Cathleen; Dent, Kevin; Ward, Geoff

    2018-01-01

    Three experiments examined the immediate free recall (IFR) of auditory-verbal and visuospatial materials from single-modality and dual-modality lists. In Experiment 1, we presented participants with between 1 and 16 spoken words, with between 1 and 16 visuospatial dot locations, or with between 1 and 16 words "and" dots with synchronized…

  16. Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.

    PubMed

    Isomura, Tomoko; Nakano, Tamami

    2016-12-14

    Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and of the zygomaticus major in response to audiovisual laughter was observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, neither the visual nor the auditory unimodal emotion stimuli activated the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present. © 2016 The Author(s).

  17. A model of head-related transfer functions based on a state-space analysis

    NASA Astrophysics Data System (ADS)

    Adams, Norman Herkamp

    This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state-space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel-operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state-space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D>10, where D is the number of directions in the HRTF collection. A series of headphone listening tests are also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. 
Depending upon the stimulus uncertainty, a minimum state-space order of 7≤N≤23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
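
    The state-space formulation described in this record can be illustrated with a minimal discrete-time sketch; the matrices below are hypothetical placeholders chosen for illustration, not HRTF data from the dissertation:

    ```python
    import numpy as np

    def statespace_filter(A, B, C, D, u):
        """Simulate a discrete-time state-space system
        x[k+1] = A x[k] + B u[k],  y[k] = C x[k] + D u[k]
        for a single-input, single-output channel."""
        x = np.zeros(A.shape[0])
        y = np.empty(len(u))
        for k, uk in enumerate(u):
            y[k] = C @ x + D * uk   # output from current state plus feedthrough
            x = A @ x + B * uk      # advance the state
        return y

    # Hypothetical order-2 approximant (placeholder coefficients)
    A = np.array([[0.5, 0.1], [0.0, 0.3]])
    B = np.array([1.0, 0.5])
    C = np.array([0.8, -0.2])
    D = 0.1

    impulse = np.zeros(8)
    impulse[0] = 1.0
    h = statespace_filter(A, B, C, D, impulse)  # impulse response of the model
    ```

    In the dissertation's setting, one such low-order approximant replaces a much longer FIR filter per direction, which is where the cost savings for large direction counts D come from.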

  18. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials

    PubMed Central

    Calderón-Garcidueñas, Lilian; D’Angiulli, Amedeo; Kulesza, Randy J; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-01-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3 ± 8.5 months from a highly polluted city (n=34) versus a low-pollution city (n=17). The brainstems of nine children who died in accidents were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p<0.0001) and wave V (t(50)=19.730; p<0.0001) but no delay in wave I (p=0.548). They also had significantly longer latencies than controls for interwave intervals I–III, III–V, and I–V (all t(50)> 7.501; p<0.0001), consistent with delayed central conduction time of brainstem neural transmission. Highly exposed children showed significant evidence of inflammatory markers, and their auditory and vestibular nuclei accumulated α-synuclein and/or β-amyloid 1–42. Medial superior olive neurons, critically involved in BAEPs, displayed significant pathology. Children’s exposure to urban air pollution increases their risk for auditory and vestibular impairment. PMID:21458557

  19. Task relevance modulates the behavioural and neural effects of sensory predictions

    PubMed Central

    Friston, Karl J.; Nobre, Anna C.

    2017-01-01

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225

  20. Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI.

    PubMed

    Sengupta, Ayan; Pollmann, Stefan; Hanke, Michael

    2018-01-01

    Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation -- primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature, and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal, or whether it is specific to the details of employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex that matches a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
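
    As a rough, hypothetical illustration of the spatial filtering idea in this record, a band-pass can be realized as a difference of Gaussians applied to a 2D activation pattern; the study's actual filtering pipeline and parameters may differ, and the data below are synthetic:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spatial_bandpass(pattern, low_sigma, high_sigma):
        """Band-pass a 2D activation pattern by subtracting a heavily
        smoothed copy (which removes coarse spatial scales) from a lightly
        smoothed copy (which removes fine scales). Sigmas are in voxels."""
        return gaussian_filter(pattern, low_sigma) - gaussian_filter(pattern, high_sigma)

    rng = np.random.default_rng(0)
    pattern = rng.standard_normal((32, 32))   # toy single-slice activation map
    filtered = spatial_bandpass(pattern, low_sigma=1.0, high_sigma=4.0)
    ```

    Sweeping the sigma pair and re-running the decoder at each setting is how such analyses probe which spatial scales carry the discriminative signal.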

  1. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a slight bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  2. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  3. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004

  4. Laser Stimulation of Single Auditory Nerve Fibers

    PubMed Central

    Littlefield, Philip D.; Vujanovic, Irena; Mundi, Jagmeet; Matic, Agnella Izzo; Richter, Claus-Peter

    2011-01-01

    Objectives/Hypothesis: One limitation with cochlear implants is the difficulty stimulating spatially discrete spiral ganglion cell groups because of electrode interactions. Multipolar electrodes have improved on this somewhat, but at the cost of much higher device power consumption. Recently, it has been shown that spatially selective stimulation of the auditory nerve is possible with a mid-infrared laser aimed at the spiral ganglion via the round window. However, these neurons must be driven at adequate rates for optical radiation to be useful in cochlear implants. We herein use single-fiber recordings to characterize the responses of auditory neurons to optical radiation. Study Design: In vivo study using normal-hearing adult gerbils. Methods: Two diode lasers were used for stimulation of the auditory nerve. They operated between 1.844 μm and 1.873 μm, with pulse durations of 35 μs to 1,000 μs, and at repetition rates up to 1,000 pulses per second (pps). The laser outputs were coupled to a 200-μm-diameter optical fiber placed against the round window membrane and oriented toward the spiral ganglion. The auditory nerve was exposed through a craniotomy, and recordings were taken from single fibers during acoustic and laser stimulation. Results: Action potentials occurred 2.5 ms to 4.0 ms after the laser pulse. The latency jitter was up to 3 ms. Maximum rates of discharge averaged 97 ± 52.5 action potentials per second. The neurons did not strictly respond to the laser at stimulation rates over 100 pps. Conclusions: Auditory neurons can be stimulated by a laser beam passing through the round window membrane and driven at rates sufficient for useful auditory information. Optical stimulation and electrical stimulation have different characteristics, which could be selectively exploited in future cochlear implants. Level of Evidence: Not applicable. PMID:20830761

  5. Method and apparatus for displaying information

    NASA Technical Reports Server (NTRS)

    Huang, Sui (Inventor); Eichler, Gabriel (Inventor); Ingber, Donald E. (Inventor)

    2010-01-01

    A method for displaying large amounts of information. The method includes the steps of forming a spatial layout of tiles each corresponding to a representative reference element; mapping observed elements onto the spatial layout of tiles of representative reference elements; assigning a respective value to each respective tile of the spatial layout of the representative elements; and displaying an image of the spatial layout of tiles of representative elements. Each tile includes atomic attributes of representative elements. The invention also relates to an apparatus for displaying large amounts of information. The apparatus includes a tiler forming a spatial layout of tiles, each corresponding to a representative reference element; a comparator mapping observed elements onto said spatial layout of tiles of representative reference elements; an assigner assigning a respective value to each respective tile of said spatial layout of representative reference elements; and a display displaying an image of the spatial layout of tiles of representative reference elements.
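
    The steps in this abstract (form a tile layout of reference elements, map observed elements onto it, assign each tile a value, display) can be sketched as a nearest-reference assignment; everything below is illustrative toy data, not taken from the patent:

    ```python
    import numpy as np

    def map_to_tiles(reference_tiles, observed):
        """Assign each observed element (a feature vector) to its nearest
        reference tile and count hits per tile, producing the per-tile
        values that would be rendered in the displayed image. The layout
        and Euclidean distance metric here are illustrative choices."""
        counts = np.zeros(len(reference_tiles), dtype=int)
        for obs in observed:
            dists = np.linalg.norm(reference_tiles - obs, axis=1)
            counts[np.argmin(dists)] += 1   # map element onto closest tile
        return counts

    # Three hypothetical reference tiles and three observed elements
    tiles = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    observed = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.2]])
    values = map_to_tiles(tiles, observed)   # per-tile values to display
    ```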

  6. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location

    PubMed Central

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-01-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808

  8. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.

    PubMed

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-08-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied, instead, whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.

  9. An investigation of spatial representation of pitch in individuals with congenital amusia.

    PubMed

    Lu, Xuejing; Sun, Yanan; Thompson, William Forde

    2017-09-01

    Spatial representation of pitch plays a central role in auditory processing. However, it is unknown whether impaired auditory processing is associated with impaired pitch-space mapping. Experiment 1 examined spatial representation of pitch in individuals with congenital amusia using a stimulus-response compatibility (SRC) task. For amusic and non-amusic participants, pitch classification was faster and more accurate when correct responses involved a physical action that was spatially congruent with the pitch height of the stimulus than when it was incongruent. However, this spatial representation of pitch was not as stable in amusic individuals, revealed by slower response times when compared with control individuals. One explanation is that the SRC effect in amusics reflects a linguistic association, requiring additional time to link pitch height and spatial location. To test this possibility, Experiment 2 employed a colour-classification task. Participants judged colour while ignoring a concurrent pitch by pressing one of two response keys positioned vertically to be congruent or incongruent with the pitch. The association between pitch and space was found in both groups, with comparable response times in the two groups, suggesting that amusic individuals are only slower to respond to tasks involving explicit judgments of pitch.

  10. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

    Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target-that is, when either the sound was repeated but not the location or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed down when the prime distractor sound was repeated as the probe target, but at another location; identity changes at the same location were not impaired. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  11. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    PubMed

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  12. Spatial Attention Modulates the Precedence Effect

    ERIC Educational Resources Information Center

    London, Sam; Bishop, Christopher W.; Miller, Lee M.

    2012-01-01

    Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such…

  13. Cerebral responses to local and global auditory novelty under general anesthesia

    PubMed Central

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2017-01-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  14. A novel hybrid auditory BCI paradigm combining ASSR and P300.

    PubMed

    Kaongoen, Netiwit; Jo, Sungho

    2017-03-01

    Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients who have visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that combines auditory steady-state response (ASSR) and spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly among all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can use both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy, 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy, 2.01 bits/min ITR) on the same binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can yield better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
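
    The ITR figures quoted above are conventionally computed with the Wolpaw formula. A sketch of the bits-per-selection part is below; converting to bits/min requires the selections-per-minute rate, which the abstract does not state, so only the accuracy-dependent term is shown:

    ```python
    import math

    def wolpaw_itr_bits_per_selection(accuracy, n_classes):
        """Wolpaw ITR in bits per selection:
        B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)).
        Multiply by selections per minute to obtain bits/min."""
        p, n = accuracy, n_classes
        bits = math.log2(n)
        if 0 < p < 1:
            bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return bits

    # Binary classification at the accuracy reported in the record
    bits = wolpaw_itr_bits_per_selection(0.8533, 2)  # ≈ 0.40 bits per selection
    ```

    Note that at a fixed accuracy, the quoted bits/min depends entirely on trial pacing, which is why systems with similar accuracies can report quite different ITRs.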

  15. Change of temporal-order judgment of sounds during long-lasting exposure to large-field visual motion.

    PubMed

    Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki

    2008-01-01

    The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. 
These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, self-motion induced by large-field visual motion (ie optic flow) can affect the perceived temporal order of successive external events across various modalities.
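The bias reported above is the kind of effect a temporal-order-judgment analysis quantifies as a displacement of the point of subjective simultaneity (PSS). A minimal, hypothetical simulation of recovering a PSS shift from binary TOJ responses (all parameter values here are invented, not taken from the study):

```python
import numpy as np

# Hypothetical simulation: vection is assumed to shift the point of
# subjective simultaneity (PSS) by +25 ms; all values are invented.
rng = np.random.default_rng(1)
soas = np.arange(-120, 121, 30)        # SOA in ms; negative = left sound first
true_pss, slope = 25.0, 40.0           # assumed bias and psychometric slope

def p_right_first(soa):
    """Logistic psychometric function for responding 'right ear first'."""
    return 1.0 / (1.0 + np.exp(-(soa - true_pss) / slope))

# Simulate 200 binary TOJ responses per SOA.
props = np.array([rng.binomial(200, p_right_first(s)) / 200.0 for s in soas])

# Recover the PSS as the SOA where the response proportion crosses 0.5,
# by linear interpolation between the two bracketing SOAs.
i = int(np.argmax(props >= 0.5))
est_pss = soas[i - 1] + (0.5 - props[i - 1]) * (soas[i] - soas[i - 1]) / (
    props[i] - props[i - 1]
)
print(round(float(est_pss), 1))        # close to the simulated +25 ms bias
```

A full analysis would fit a cumulative Gaussian or logistic by maximum likelihood; linear interpolation is just the shortest route to the 50% crossing point.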

  16. Nonverbal spatially selective attention in 4- and 5-year-old children.

    PubMed

    Sanders, Lisa D; Zobel, Benjamin H

    2012-07-01

    Under some conditions 4- and 5-year-old children can differentially process sounds from attended and unattended locations. In fact, the latency of spatially selective attention effects on auditory processing as measured with event-related potentials (ERPs) is quite similar in young children and adults. However, it is not clear if developmental differences in the polarity, distribution, and duration of attention effects are best attributed to acoustic characteristics, availability of non-spatial attention cues, task demands, or domain. In the current study adults and children were instructed to attend to one of two simultaneously presented soundscapes (e.g., city sounds or night sounds) to detect targets (e.g., car horn or owl hoot) in the attended channel only. Probes presented from the same location as the attended soundscape elicited a larger negativity by 80 ms after onset in both adults and children. This initial negative difference (Nd) was followed by a larger positivity for attended probes in adults and another negativity for attended probes in children. The results indicate that the neural systems by which attention modulates early auditory processing are available for young children even when presented with nonverbal sounds. They also suggest important interactions between attention, acoustic characteristics, and maturity on auditory evoked potentials. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and we speculate that this could be due to linguistic and auditory cues that are still developing at age five.

  18. Preconditioning of Spatial and Auditory Cues: Roles of the Hippocampus, Frontal Cortex, and Cue-Directed Attention

    PubMed Central

    Talk, Andrew C.; Grasby, Katrina L.; Rawson, Tim; Ebejer, Jane L.

    2016-01-01

    Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity. PMID:27999366

  19. Human sensitivity to differences in the rate of auditory cue change.

    PubMed

    Maloff, Erin S; Grantham, D Wesley; Ashmead, Daniel H

    2013-05-01

    Measurement of sensitivity to differences in the rate of change of auditory signal parameters is complicated by confounds among duration, extent, and velocity of the changing signal. Dooley and Moore [(1988) J. Acoust. Soc. Am. 84(4), 1332-1337] proposed a method for measuring sensitivity to rate of change using a duration discrimination task. They reported improved duration discrimination when an additional intensity or frequency change cue was present. The current experiments were an attempt to use this method to measure sensitivity to the rate of change in intensity and spatial position. Experiment 1 investigated whether duration discrimination was enhanced when additional cues of rate of intensity change, rate of spatial position change, or both were provided. Experiment 2 determined whether participant listening experience or the testing environment influenced duration discrimination task performance. Experiment 3 assessed whether duration discrimination could be used to measure sensitivity to rates of changes in intensity and spatial position for stimuli with lower rates of change, as well as emphasizing the constancy of the velocity cue. Results of these experiments showed that duration discrimination was impaired rather than enhanced by the additional velocity cues. The findings are discussed in terms of the demands of listening to concurrent changes along multiple auditory dimensions.
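The confound the authors describe follows from a simple identity: for a linearly changing signal, extent = rate × duration, so fixing any two of the three quantities determines the third. A small arithmetic sketch with invented values:

```python
# Illustrative only: values are invented to show the arithmetic of the confound.
def ramp_extent(rate_db_per_s: float, duration_s: float) -> float:
    """Total change (extent, in dB) of a linear intensity ramp."""
    return rate_db_per_s * duration_s

# With the rate (velocity) held constant, the longer sound also changes
# by a larger extent, so "duration" judgments could rely on extent instead.
short = ramp_extent(20.0, 0.25)     # 5.0 dB
long_ = ramp_extent(20.0, 0.50)     # 10.0 dB
print(short, long_)
```

This is why a duration-discrimination task with a constant-velocity cue can, in principle, isolate sensitivity to rate of change: the listener can exploit extent only insofar as the velocity is perceptually constant.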

  20. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community tools to investigate the learning of space. The issue of transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measured systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performance in the real environment.

  1. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization and Cortico-Striatal Activation.

    PubMed

    Sameiro-Barbosa, Catia M; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.

  2. Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.

    PubMed

    Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru

    2015-01-01

    Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.

  3. Temporary conductive hearing loss in early life impairs spatial memory of rats in adulthood.

    PubMed

    Zhao, Han; Wang, Li; Chen, Liang; Zhang, Jinsheng; Sun, Wei; Salvi, Richard J; Huang, Yi-Na; Wang, Ming; Chen, Lin

    2018-05-31

    It is known that an interruption of acoustic input in early life will result in abnormal development of the auditory system. Here, we further show that this negative impact extends beyond the auditory system to the hippocampus, a system critical for spatial memory. We induced a temporary conductive hearing loss (TCHL) in P14 rats by perforating the eardrum and allowing it to heal. The Morris water maze and Y-maze tests were deployed to evaluate spatial memory of the rats. Electrophysiological recordings and anatomical analysis were made to evaluate functional and structural changes in the hippocampus following TCHL. The rats with the TCHL had nearly normal hearing at P42, but showed decreased performance on the Morris water maze and Y-maze tests compared with the control group. A functional deficit in the hippocampus of the rats with the TCHL was found, as revealed by the depressed long-term potentiation and the reduced NMDA receptor-mediated postsynaptic current. A structural deficit in the hippocampus of those animals was also found, as revealed by the abnormal expression of the NMDA receptors, the decreased number of dendritic spines, the reduced postsynaptic density and the reduced level of neurogenesis. Our study demonstrates that even temporary auditory sensory deprivation in early life of rats results in abnormal development of the hippocampus and consequently impairs spatial memory in adulthood. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.

  4. How does experience modulate auditory spatial processing in individuals with blindness?

    PubMed

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify which of 15 locations the sound would have been emitted from. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, accuracy on sound localization correlated only with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.

  5. Cortical activation patterns to spatially presented pure tone stimuli with different intensities measured by functional near-infrared spectroscopy.

    PubMed

    Bauernfeind, Günther; Wriessnegger, Selina C; Haumann, Sabine; Lenarz, Thomas

    2018-03-08

    Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the assessment of functional activity of the cerebral cortex. Recently, fNIRS has also been envisaged as a novel neuroimaging approach for measuring auditory cortex activity in the field of auditory diagnostics. This study aimed to investigate differences in brain activity related to spatially presented sounds with different intensities in 10 subjects by means of fNIRS. We found pronounced cortical activation patterns in the temporal and frontal regions of both hemispheres. In contrast to these activation patterns, we found deactivation patterns in central and parietal regions of both hemispheres. Furthermore, our results showed an influence of spatial presentation and intensity of the presented sounds on brain activity in related regions of interest. These findings are in line with previous fMRI studies which also reported systematic changes of activation in temporal and frontal areas with increasing sound intensity. Although clear evidence for contralaterality effects and hemispheric asymmetries was absent in the group data, these effects were partially visible at the single-subject level. In conclusion, fNIRS is sensitive enough to capture differences in brain responses during the spatial presentation of sounds with different intensities in several cortical regions. Our results may serve as a valuable contribution for further basic research and the future use of fNIRS in the area of central auditory diagnostics. © 2018 Wiley Periodicals, Inc.

  6. Fear Conditioning is Disrupted by Damage to the Postsubiculum

    PubMed Central

    Robinson, Siobhan; Bucci, David J.

    2011-01-01

    The hippocampus plays a central role in spatial and contextual learning and memory; however, relatively little is known about the specific contributions of parahippocampal structures that interface with the hippocampus. The postsubiculum (PoSub) is reciprocally connected with a number of hippocampal, parahippocampal and subcortical structures that are involved in spatial learning and memory. In addition, behavioral data suggest that PoSub is needed for optimal performance during tests of spatial memory. Together, these data suggest that PoSub plays a prominent role in spatial navigation. Currently, it is unknown whether the PoSub is needed for other forms of learning and memory that also require the formation of associations among multiple environmental stimuli. To address this gap in the literature, we investigated the role of PoSub in Pavlovian fear conditioning. In Experiment 1, male rats received either lesions of PoSub or sham surgery prior to training in a classical fear conditioning procedure. On the training day, a tone was paired with foot shock three times. Conditioned fear to the training context was evaluated 24 hr later by placing rats back into the conditioning chamber without presenting any tones or shocks. Auditory fear was assessed on the third day by presenting the auditory stimulus in a novel environment (no shock). PoSub-lesioned rats exhibited impaired acquisition of the conditioned fear response as well as impaired expression of contextual and auditory fear conditioning. In Experiment 2, PoSub lesions were made 1 day after training to specifically assess the role of PoSub in fear memory. No deficits in the expression of contextual fear were observed, but freezing to the tone was significantly reduced in PoSub-lesioned rats compared to shams. Together, these results indicate that PoSub is necessary for normal acquisition of conditioned fear, and that PoSub contributes to the expression of auditory but not contextual fear memory. PMID:22076971

  7. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities

    PubMed Central

    Santangelo, Valerio

    2018-01-01

    Higher-order cognitive processes have been shown to rely on the interplay between large-scale neural networks. However, the brain networks involved in the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations recruited a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities recruited a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlight a dissociation between the brain networks implicated in divided attention across spatial locations and across sensory modalities, pointing out the importance of investigating effective connectivity of large-scale brain networks supporting complex behavior. PMID:29535614
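The Granger-causality analysis mentioned above asks whether past values of one signal improve prediction of another beyond that signal's own history. A self-contained lag-1 sketch on simulated data (signal labels and coefficients are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)          # hypothetical "prefrontal" driver signal
y = np.zeros(n)                     # hypothetical "parietal" follower signal
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.5 * rng.standard_normal()

def granger_f(target: np.ndarray, source: np.ndarray) -> float:
    """Lag-1 Granger F-statistic: do lagged source values improve
    prediction of target beyond target's own lagged values?"""
    Y = target[1:]
    restricted = np.column_stack([np.ones_like(Y), target[:-1]])
    full = np.column_stack([restricted, source[:-1]])
    rss = lambda X: float(np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2))
    rss_r, rss_f = rss(restricted), rss(full)
    dof = len(Y) - full.shape[1]    # one restriction tested (q = 1)
    return (rss_r - rss_f) / (rss_f / dof)

# The forward direction should dominate: x drives y, not vice versa.
print(granger_f(y, x) > granger_f(x, y))
```

In the study itself the analysis was run over IC time courses from fMRI; the same restricted-vs-full F-test logic extends to higher lags by adding more lagged columns to both design matrices.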

  9. Composing alarms: considering the musical aspects of auditory alarm design.

    PubMed

    Gillard, Jessica; Schutz, Michael

    2016-12-01

    Short melodies are commonly linked to referents in jingles, ringtones, movie themes, and even auditory displays (i.e., sounds used in human-computer interactions). While melody associations can be quite effective, auditory alarms in medical devices are generally poorly learned and highly confused. Here, we draw on approaches and stimuli from both music cognition (melody recognition) and human factors (alarm design) to analyze the patterns of confusions in a paired-associate alarm-learning task involving both a standardized melodic alarm set (Experiment 1) and a set of novel melodies (Experiment 2). Although contour played a role in confusions (consistent with previous research), we observed several cases in which melodies with similar contours were rarely confused - namely, melodies with musically distinctive features. This exploratory work suggests that salient features formed by an alarm's melodic structure (such as repeated notes, distinct contours, and easily recognizable intervals) can increase the likelihood of correct alarm identification. We conclude that the use of musical principles and features may help future efforts to improve the design of auditory alarms.

  10. Visual and auditory accessory stimulus offset and the Simon effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-10-01

    We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.

  11. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users.

    PubMed

    Scheperle, Rachel A; Abbas, Paul J

    2015-01-01

    The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.

  12. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  13. Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.

    PubMed

    Hazell, J W; Jastreboff, P J

    1990-02-01

    A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose signal undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus, and the way in which they respond to changes in the environment, are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and its interaction with the prefrontal cortex and limbic system, is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.

  14. Auditory observation of stepping actions can cue both spatial and temporal components of gait in Parkinson's disease patients.

    PubMed

    Young, William R; Rodger, Matthew W M; Craig, Cathy M

    2014-05-01

    A common behavioural symptom of Parkinson's disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid 'action-related' sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue. The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds. Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were only observed in the PD group when walking to, or imagining, the recorded sounds. The findings show that while recordings of stepping sounds convey action information that allows PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Lateralization of Frequency-Specific Networks for Covert Spatial Attention to Auditory Stimuli

    PubMed Central

    Thorpe, Samuel; D'Zmura, Michael

    2011-01-01

    We conducted a cued spatial attention experiment to investigate the time–frequency structure of human EEG induced by attentional orientation of an observer in external auditory space. Seven subjects participated in a task in which attention was cued to one of two spatial locations at left and right. Subjects were instructed to report the speech stimulus at the cued location and to ignore a simultaneous speech stream originating from the uncued location. EEG was recorded from the onset of the directional cue through the offset of the inter-stimulus interval (ISI), during which attention was directed toward the cued location. Using a wavelet spectrum, each frequency band was then normalized by the mean level of power observed in the early part of the cue interval to obtain a measure of induced power related to the deployment of attention. Topographies of band specific induced power during the cue and inter-stimulus intervals showed peaks over symmetric bilateral scalp areas. We used a bootstrap analysis of a lateralization measure defined for symmetric groups of channels in each band to identify specific lateralization events throughout the ISI. Our results suggest that the deployment and maintenance of spatially oriented attention throughout a period of 1,100 ms is marked by distinct episodes of reliable hemispheric lateralization ipsilateral to the direction in which attention is oriented. An early theta lateralization was evident over posterior parietal electrodes and was sustained throughout the ISI. In the alpha and mu bands punctuated episodes of parietal power lateralization were observed roughly 500 ms after attentional deployment, consistent with previous studies of visual attention. In the beta band these episodes show similar patterns of lateralization over frontal motor areas. These results indicate that spatial attention involves similar mechanisms in the auditory and visual modalities. PMID:21630112
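The normalization described above (dividing each frequency band's power by its mean power in the early cue interval) and a lateralization measure over symmetric channel groups can be sketched as follows. This is a minimal illustration, assuming a standard normalized-difference lateralization index; the paper's exact measure and bootstrap procedure are not reproduced here.

```python
import numpy as np

def induced_power(tfr, baseline_cols):
    """Normalize a time-frequency power array (n_freqs x n_times) by the
    mean power of each frequency band over a baseline window, yielding
    induced power relative to the early cue interval."""
    baseline = tfr[:, baseline_cols].mean(axis=1, keepdims=True)
    return tfr / baseline

def lateralization_index(left_power, right_power):
    """Normalized power difference between symmetric channel groups; a
    common formulation assumed here, not taken verbatim from the paper."""
    return (right_power - left_power) / (right_power + left_power)
```

Positive index values indicate right-hemisphere dominance, negative values left-hemisphere dominance, for the band and time window examined.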

  16. Method and apparatus for an optical function generator for seamless tiled displays

    NASA Technical Reports Server (NTRS)

    Johnson, Michael (Inventor); Chen, Chung-Jen (Inventor)

    2004-01-01

    Producing seamless tiled images from multiple displays includes measuring a luminance profile of each of the displays, computing a desired luminance profile for each of the displays, and determining a spatial gradient profile of each of the displays based on the measured luminance profile and the computed desired luminance profile. The determined spatial gradient profile is applied to a spatial filter to be inserted into each of the displays to produce the seamless tiled display image.
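One hedged reading of the gradient computation above is a per-pixel attenuation map derived from the measured and desired luminance profiles; since an optical filter can only attenuate, the ratio is clipped to one. The function name and formulation below are illustrative assumptions, not the patented method.

```python
import numpy as np

def attenuation_profile(measured, desired):
    """Spatial gradient profile as the per-pixel ratio of desired to
    measured luminance, clipped to [0, 1] because a passive optical
    filter cannot amplify light."""
    return np.clip(np.asarray(desired, float) / np.asarray(measured, float),
                   0.0, 1.0)
```

In an overlap region between adjacent tiles, the desired profile would typically roll off linearly on each tile so that the two attenuated profiles sum to a flat seam-free luminance.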

  17. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to bilateral posterior sylvian region (PSR) activity. However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processes within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  18. Auditory decision aiding in supervisory control of multiple unmanned aerial vehicles.

    PubMed

    Donmez, Birsen; Cummings, M L; Graham, Hudson D

    2009-10-01

    This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.
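A sonification in the sense used above maps the continuous state of a monitored task onto a sound parameter. A minimal sketch is mapping course deviation to tone frequency; the mapping shape, base frequency, and range below are illustrative assumptions, not the study's actual design.

```python
def deviation_to_frequency(deviation_deg, max_dev_deg=10.0,
                           f_base=440.0, f_span=220.0):
    """Map absolute course deviation to a tone frequency in
    [f_base, f_base + f_span]; larger deviation -> higher pitch.
    All parameter values are hypothetical."""
    d = min(abs(deviation_deg) / max_dev_deg, 1.0)  # clamp to [0, 1]
    return f_base + d * f_span
```

Because the tone varies continuously with the deviation, an operator can hear both the current state and its trend, which is the predictive property the study credits for faster reactions.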

  19. The neuroergonomic evaluation of human machine interface design in air traffic control using behavioral and EEG/ERP measures.

    PubMed

    Giraudet, L; Imbert, J-P; Bérenger, M; Tremblay, S; Causse, M

    2015-11-01

    The Air Traffic Control (ATC) environment is complex and safety-critical. Whilst exchanging information with pilots, controllers must also be alert to visual notifications displayed on the radar screen (e.g., a warning indicating a loss of minimum separation between aircraft). Under the assumption that attentional resources are shared between vision and hearing, the visual interface design may also impact the ability to process these auditory stimuli. Using a simulated ATC task, we compared the behavioral and neural responses to two different visual notification designs--the operational alarm that involves blinking colored "ALRT" text displayed around the label of the notified plane ("Color-Blink"), and the more salient alarm involving the same blinking text plus four moving yellow chevrons ("Box-Animation"). Participants performed a concurrent auditory task with the requirement to react to rare pitch tones. The P300 evoked by the tones was taken as an indicator of remaining attentional resources. Participants who were presented with the more salient visual design showed better accuracy than the group with the suboptimal operational design. On a physiological level, auditory P300 amplitude in the former group was greater than that observed in the latter group. One potential explanation is that the enhanced visual design freed up attentional resources which, in turn, improved the cerebral processing of the auditory stimuli. These results suggest that P300 amplitude can be used as a valid estimation of the efficiency of interface designs, and of cognitive load more generally. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Probing the limits of alpha power lateralisation as a neural marker of selective attention in middle-aged and older listeners.

    PubMed

    Tune, Sarah; Wöstmann, Malte; Obleser, Jonas

    2018-02-11

    In recent years, hemispheric lateralisation of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet, how healthy ageing, beginning in middle adulthood, impacts the modulation of lateralised alpha power supporting auditory attention remains poorly understood. In the current electroencephalography study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging, multitalker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as a neural marker of listening success across age. Given the increase in interindividual variability with age, we examined an extensive battery of behavioural, perceptual and neural measures. Similar to findings on younger adults, middle-aged and older listeners' auditory spatial attention induced robust lateralisation of alpha power, which synchronised with the speech rate. Notably, the observed relationship between this alpha lateralisation and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioural variables independent of age. Our results suggest that in age-varying samples such as the present one, the lateralisation of alpha power is neither a sufficient nor a necessary neural strategy for an individual's auditory spatial attention, as higher age might come with increased use of alternative, compensatory mechanisms. Our findings emphasise that explaining interindividual variability will be key to understanding the role of alpha oscillations in auditory attention in the ageing listener. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Visual motion disambiguation by a subliminal sound.

    PubMed

    Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier

    2008-09-01

    There is growing interest in the effect of sound on visual motion perception. One model involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen to either bounce off or stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the sound bounce-inducing effect is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

  2. Age-related decline in verbal learning is moderated by demographic factors, working memory capacity, and presence of amnestic mild cognitive impairment.

    PubMed

    Constantinidou, Fofi; Zaganas, Ioannis; Papastefanakis, Emmanouil; Kasselimis, Dimitrios; Nidos, Andreas; Simos, Panagiotis G

    2014-09-01

    Age-related memory changes are highly varied and heterogeneous. The study examined the rate of decline in verbal episodic memory as a function of education level, auditory attention span and verbal working memory capacity, and diagnosis of amnestic mild cognitive impairment (a-MCI). Data were available on a community sample of 653 adults aged 17-86 years and 70 patients with a-MCI recruited from eight broad geographic areas in Greece and Cyprus. Measures of auditory attention span and working memory capacity (digits forward and backward) and verbal episodic memory (Auditory Verbal Learning Test [AVLT]) were used. Moderated mediation regressions on data from the community sample did not reveal significant effects of education level on the rate of age-related decline in AVLT indices. The presence of a-MCI was a significant moderator of the direct effect of Age on both immediate and delayed episodic memory indices. The rate of age-related decline in verbal episodic memory is normally mediated by working memory capacity. Moreover, in persons who display poor episodic memory capacity (a-MCI group), age-related memory decline is expected to advance more rapidly for those who also display relatively poor verbal working memory capacity.

  3. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. 
Regression analysis revealed that a combination of peripheral auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was related to a combination of working memory and peripheral auditory factors across all listeners in the study. The results of this study are consistent with previous findings that a subset of listeners with MTBI has objective auditory deficits. Speech-in-noise performance was related to a combination of auditory and nonauditory factors, confirming the important role of audiology in MTBI rehabilitation. Further research is needed to evaluate the prevalence and causal relationship of auditory deficits following MTBI. American Academy of Audiology

  4. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

    It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has unequal durations, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory (e.g. iconic vs. echoic) duration have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual task condition, performance will be better if the visual task is completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multi-modal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multi-modal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption. 
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting Experiment 1-4 results is proposed and evaluated.

  5. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials.

    PubMed

    Calderón-Garcidueñas, Lilian; D'Angiulli, Amedeo; Kulesza, Randy J; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-06-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3±8.5 months from a highly polluted city (n=34) versus a low-pollution city (n=17). The brainstems of nine children with accidental deaths were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p<0.0001) and wave V (t(50)=19.730; p<0.0001) but no delay in wave I (p=0.548). They also had significantly longer latencies than controls for interwave intervals I-III, III-V, and I-V (all t(50)>7.501; p<0.0001), consistent with delayed central conduction time of brainstem neural transmission. Highly exposed children showed significant evidence of inflammatory markers, and their auditory and vestibular nuclei accumulated α synuclein and/or β amyloid(1-42). Medial superior olive neurons, critically involved in BAEPs, displayed significant pathology. Children's exposure to urban air pollution increases their risk for auditory and vestibular impairment. Copyright © 2011 ISDN. Published by Elsevier Ltd. All rights reserved.
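The interwave intervals reported above are, by definition, differences between the absolute latencies of the BAEP waves; prolonged I-III, III-V, and I-V intervals with a normal wave I point to slowed central (brainstem) conduction rather than a peripheral delay. A minimal helper (latency values in the test are illustrative, not the study's data):

```python
def interwave_intervals(latencies_ms):
    """BAEP interwave intervals (ms) from absolute wave latencies,
    given as a dict with keys "I", "III", and "V"."""
    return {
        "I-III": latencies_ms["III"] - latencies_ms["I"],
        "III-V": latencies_ms["V"] - latencies_ms["III"],
        "I-V": latencies_ms["V"] - latencies_ms["I"],
    }
```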

  6. Spiral Ganglion Neuron Projection Development to the Hindbrain in Mice Lacking Peripheral and/or Central Target Differentiation

    PubMed Central

    Elliott, Karen L.; Kersigo, Jennifer; Pan, Ning; Jahan, Israt; Fritzsch, Bernd

    2017-01-01

    We investigate the importance of the degree of peripheral or central target differentiation for mouse auditory afferent navigation to the organ of Corti and auditory nuclei in three different mouse models: first, a mouse in which the differentiation of hair cells, but not central auditory nuclei neurons is compromised (Atoh1-cre; Atoh1f/f); second, a mouse in which hair cell defects are combined with a delayed defect in central auditory nuclei neurons (Pax2-cre; Atoh1f/f), and third, a mouse in which both hair cells and central auditory nuclei are absent (Atoh1−/−). Our results show that neither differentiated peripheral nor the central target cells of inner ear afferents are needed (hair cells, cochlear nucleus neurons) for segregation of vestibular and cochlear afferents within the hindbrain and some degree of base to apex segregation of cochlear afferents. These data suggest that inner ear spiral ganglion neuron processes may predominantly rely on temporally and spatially distinct molecular cues in the region of the targets rather than interaction with differentiated target cells for a crude topological organization. These developmental data imply that auditory neuron navigation properties may have evolved before auditory nuclei. PMID:28450830

  7. Listening to music primes space: pianists, but not novices, simulate heard actions.

    PubMed

    Taylor, J Eric T; Witt, Jessica K

    2015-03-01

    Musicians sometimes report twitching in their fingers or hands while listening to music. This anecdote could be indicative of a tendency for auditory-motor co-representation in musicians. Here, we describe two studies showing that pianists (Experiment 1), but not novices (Experiment 2), automatically generate spatial representations that correspond to learned musical actions while listening to music. Participants made one-handed movements to the left or right from a central location in response to visual stimuli while listening to task-irrelevant auditory stimuli, which were scales played on a piano. These task-irrelevant scales were either ascending (compatible with rightward movements) or descending (compatible with leftward movements). Pianists were faster to respond when the scale direction was compatible with the direction of response movement, whereas novices' movements were unaffected by the scale. These results are in agreement with existing research on action-effect coupling in musicians, which draws heavily on common coding theory. In addition, these results show how intricate auditory stimuli (ascending or descending scales) evoke coarse, domain-general spatial representations.

  8. Estrogen and hearing from a clinical point of view; characteristics of auditory function in women with Turner syndrome.

    PubMed

    Hederstierna, Christina; Hultcrantz, Malou; Rosenhall, Ulf

    2009-06-01

    Turner syndrome is a chromosomal aberration affecting 1:2000 newborn girls, in which all or part of one X chromosome is absent. This leads to ovarian dysgenesis and little or no endogenous estrogen production. These women have, among many other syndromal features, a high occurrence of ear and hearing problems, and neurocognitive dysfunctions, including reduced visual-spatial abilities; it is assumed that estrogen deficiency is at least partially responsible for these problems. In this study, 30 Turner women aged 40-67, with mild to moderate hearing loss, performed a battery of hearing tests aimed at localizing the lesion causing the sensorineural hearing impairment and assessing central auditory function, primarily sound localization. The results of TEOAE, ABR and speech recognition scores in noise were all indicative of cochlear dysfunction as the cause of the sensorineural impairment. Phase audiometry, a test for sound localization, showed mild disturbances in the Turner women compared to the reference group, suggesting that auditory-spatial dysfunction is another facet of the recognized neurocognitive phenotype in Turner women.

  9. Temporal ventriloquism: crossmodal interaction on the time dimension. 1. Evidence from auditory-visual temporal order judgment.

    PubMed

    Bertelson, Paul; Aschersleben, Gisa

    2003-10-01

    In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes, respectively, separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs when the sounds were in the same frontal location as the flashes than when they were in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar result obtained recently in a case of tactile-visual discrepancy, this result supports a view in which timing and spatial layout of the inputs play to some extent interchangeable roles in the pairing operation at the base of crossmodal interaction.
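The staircase procedure described (SOA reduced until order judgments reach a point of uncertainty) can be sketched generically. The 1-up/1-down rule, step size, and reversal-averaging estimate below are conventional psychophysics assumptions, not the authors' exact parameters.

```python
def staircase_threshold(respond, soa_start=200.0, step=20.0,
                        soa_min=10.0, n_reversals=8):
    """1-up/1-down staircase on SOA (ms): step down after a correct
    temporal-order judgment, up after an error. The point of
    uncertainty is estimated as the mean SOA at reversals.
    `respond(soa)` returns True for a correct judgment."""
    soa, going_down, reversals = soa_start, None, []
    while len(reversals) < n_reversals:
        correct = respond(soa)
        # A reversal occurs when the step direction changes.
        if going_down is not None and correct != going_down:
            reversals.append(soa)
        going_down = correct
        soa = max(soa_min, soa - step if correct else soa + step)
    return sum(reversals) / len(reversals)
```

With a deterministic observer who is correct only above some SOA, the staircase oscillates around that value and the reversal mean converges near it.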

  10. Auditory attention strategy depends on target linguistic properties and spatial configuration

    PubMed Central

    McCloy, Daniel R.; Lee, Adrian K. C.

    2015-01-01

    Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, varies the cue type (providing phonetic or semantic information), and requires attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011

  11. Distribution of Auditory Response Behaviors in Normal Infants and Profoundly Multihandicapped Children.

    ERIC Educational Resources Information Center

    Flexer, Carol; Gans, Donald P.

    1986-01-01

    A study compared the responsiveness to sound by normal infants and profoundly multihandicapped children. Results revealed that the profoundly multihandicapped subjects displayed relatively more reflexive than attentive type behaviors and exhibited fewer behaviors per response. (Author/CB)

  12. Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load

    ERIC Educational Resources Information Center

    Santangelo, Valerio; Spence, Charles

    2007-01-01

    We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…

  13. Long-Term Memory Biases Auditory Spatial Attention

    ERIC Educational Resources Information Center

    Zimmermann, Jacqueline F.; Moscovitch, Morris; Alain, Claude

    2017-01-01

    Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants…

  14. Distributed Processing and Cortical Specialization for Speech and Environmental Sounds in Human Temporal Cortex

    ERIC Educational Resources Information Center

    Leech, Robert; Saygin, Ayse Pinar

    2011-01-01

    Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…

  15. Influence of visual and auditory biofeedback on partial body weight support treadmill training of individuals with chronic hemiparesis: a randomized controlled clinical trial.

    PubMed

    Brasileiro, A; Gama, G; Trigueiro, L; Ribeiro, T; Silva, E; Galvão, É; Lindquist, A

    2015-02-01

    Stroke is an important cause of disability and functional dependence worldwide. To determine the immediate effects of visual and auditory biofeedback, combined with partial body weight supported (PBWS) treadmill training, on the gait of individuals with chronic hemiparesis. Randomized controlled trial. Outpatient rehabilitation hospital. Thirty subjects with chronic hemiparesis and the ability to walk with some help. Participants were randomized to a control group that underwent only PBWS treadmill training; an experimental group I with visual biofeedback from the display monitor, in the form of symbolic feet as the subject took a step; or an experimental group II with auditory biofeedback associated with the display, using a metronome at 115% of the individual's preferred cadence. They trained for 20 minutes and were evaluated before and after training. Spatio-temporal and angular gait variables were obtained by kinematics from the Qualisys Motion Analysis system. Increases in speed and stride length were observed for all groups over time (speed: F=25.63; P<0.001; stride length: F=27.18; P<0.001), as well as changes in hip and ankle range of motion (ROM) (hip ROM: F=14.43; P=0.001; ankle ROM: F=4.76; P=0.038), with no time × group interaction. Other spatio-temporal and angular parameters remained unchanged. Visual and auditory biofeedback had no short-term influence on PBWS treadmill training of individuals with chronic hemiparesis. Additional studies are needed to determine whether, in the long term, biofeedback promotes additional benefit beyond PBWS treadmill training alone. The findings of this study indicate that visual and auditory biofeedback do not bring immediate benefits to PBWS treadmill training of individuals with chronic hemiparesis.
This suggests that, for additional benefits to be achieved with biofeedback, effects should be investigated after long-term training, which may determine whether one kind of biofeedback is superior to another in improving hemiparetic gait.

  16. Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion

    PubMed Central

    Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-01

    The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of the pen on a paper when someone is drawing. Sounds were presented diotically and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortion induced by inconsistent elliptical kinematics in both visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context. PMID:27119411
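
    The curvature–velocity relation referred to above can be made concrete with a short sketch (illustrative parameters only, not the study's stimuli): under the one-third power law, tangential velocity scales as v = K·κ^(−1/3), so motion slows where curvature is high.

```python
import numpy as np

# Illustrative sketch (assumed parameters, NOT the study's stimuli):
# the one-third power law relates tangential velocity v to path
# curvature kappa as v = K * kappa**(-1/3), so biologically consistent
# movement slows down in the highly curved parts of a trajectory.
a, b, K = 2.0, 1.0, 1.0                        # ellipse semi-axes and gain
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)

# Curvature of the ellipse x = a*cos(theta), y = b*sin(theta)
kappa = (a * b) / ((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2) ** 1.5

v = K * kappa ** (-1.0 / 3.0)                  # one-third power law profile

# Curvature peaks at the ends of the major axis, so velocity is lowest
# there; the ratio v_max/v_min equals (kappa_max/kappa_min)**(1/3).
print(round(v.max() / v.min(), 3))             # → 2.0 for a 2:1 ellipse
```

    With a = 2 and b = 1, curvature ranges from b/a² = 0.25 to a/b² = 2, an 8-fold span, so the law predicts exactly a 2-fold (8^(1/3)) velocity modulation around the ellipse.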

  17. Comparison between treadmill training with rhythmic auditory stimulation and ground walking with rhythmic auditory stimulation on gait ability in chronic stroke patients: A pilot study.

    PubMed

    Park, Jin; Park, So-yeon; Kim, Yong-wook; Woo, Youngkeun

    2015-01-01

    Generally, treadmill training is a very effective intervention, and rhythmic auditory stimulation is designed to provide feedback during gait training in stroke patients. The purpose of this study was to compare the gait abilities of chronic stroke patients following either treadmill walking training with rhythmic auditory stimulation (TRAS) or overground walking training with rhythmic auditory stimulation (ORAS). Nineteen subjects were divided into two groups: a TRAS group (9 subjects) and an ORAS group (10 subjects). Temporal and spatial gait parameters and motor recovery ability were measured before and after the training period. Gait ability was measured by the Biodex Gait Trainer treadmill system, the Timed Up and Go test (TUG), 6 meter walking distance (6MWD), and Functional Gait Assessment (FGA). After the training periods, the TRAS group showed a significant improvement in walking speed, step cycle, step length of the unaffected limb, coefficient of variation, 6MWD, and FGA when compared to the ORAS group (p < 0.05). Treadmill walking training with rhythmic auditory stimulation may be useful for the rehabilitation of patients with chronic stroke.

  18. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part.

    PubMed

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-09-13

    The simultaneous perception of multimodal sensory information plays a crucial role in effective reactions to the external environment. Voluntary movements are known to occasionally affect the simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to a non-moving body part. We investigated the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, in which participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus for the two to be perceived as occurring simultaneously. This subjective simultaneity differed significantly from that in the passive movement and no movement conditions. These findings indicate that the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli extends to a non-moving body part.
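
    The simultaneity shift described here is the kind of effect typically quantified with a temporal order judgement analysis. A minimal sketch (made-up response proportions, not the study's data) estimates the point of subjective simultaneity (PSS) as the stimulus-onset asynchrony at which "tactile first" responses cross 50%:

```python
import numpy as np

# Hedged sketch with illustrative data (NOT the study's): negative SOA
# means the auditory stimulus led; p is the observed proportion of
# "tactile first" responses at each SOA, assumed monotone increasing.
soa = np.array([-120, -80, -40, 0, 40, 80, 120])        # ms, assumed grid
p_tactile_first = np.array([0.05, 0.10, 0.25, 0.40, 0.65, 0.85, 0.95])

def pss(soa, p, criterion=0.5):
    """SOA at which p crosses the criterion, by linear interpolation."""
    i = np.searchsorted(p, criterion)        # first index with p[i] >= 0.5
    x0, x1, y0, y1 = soa[i - 1], soa[i], p[i - 1], p[i]
    return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)

# A positive PSS means the auditory stimulus had to lead the tactile one
# to be perceived as simultaneous, the pattern reported in this study.
print(pss(soa, p_tactile_first))             # → 16.0 ms for these data
```

    A full analysis would usually fit a cumulative Gaussian or logistic psychometric function rather than interpolating, but the PSS estimate it yields plays the same role.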

  19. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2012-06-07

    In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque

    PubMed Central

    Fukushima, Makoto; Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2012-01-01

    Summary In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here we used chronic micro-electrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. PMID:22681693

  1. Transformation of temporal sequences in the zebra finch auditory system

    PubMed Central

    Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J

    2016-01-01

    This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or ‘spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971

  2. A physiologically-inspired model reproducing the speech intelligibility benefit in cochlear implant listeners with residual acoustic hearing.

    PubMed

    Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim

    2017-02-01

    This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit when receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field, which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating the deprivation of the auditory system, and (3) the upper frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper frequency limit of preserved acoustic hearing has little influence on the speech intelligibility of EA-listeners in stationary noise. The model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
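
    The first model factor, spatial spread of the electric field, is commonly approximated in cochlear implant modeling by exponential current decay away from the electrode. A toy illustration (assumed length constants and threshold, unrelated to the paper's actual implementation) shows why a broader spread co-varies with the number of stimulated fibers:

```python
import numpy as np

# Toy illustration (assumed numbers; NOT the paper's model): approximate
# the electric field along the cochlea as exponential decay from the
# electrode and count fiber positions driven above a firing threshold.
def fibers_excited(length_const_mm, threshold=0.1):
    x = np.linspace(-10.0, 10.0, 2001)              # mm along the cochlea
    field = np.exp(-np.abs(x) / length_const_mm)    # normalized field
    return int(np.count_nonzero(field >= threshold))

# A larger length constant (broader field) recruits more fibers, which
# the paper links to poorer (higher) simulated SRTs.
print(fibers_excited(1.0) < fibers_excited(4.0))    # → True
```

    Under this exponential approximation the excited extent grows linearly with the length constant (|x| ≤ λ·ln(1/threshold)), so quadrupling λ quadruples the recruited region.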

  3. Auditory preference of children with autism spectrum disorders.

    PubMed

    Gilbertson, Lynn R; Lutfi, Robert A; Ellis Weismer, Susan

    2017-05-01

    Research on children with autism spectrum disorders suggests differences from neurotypical children in the preference for 'social' versus 'nonsocial' sounds. Conclusions have been based largely on the use of head-turn methodology which has various limitations as a means of establishing auditory preference. In the present study, preference was assessed by measuring the frequency with which children pressed a button to hear different sounds using an interactive toy. Contrary to prior results, both groups displayed a strong preference for the highly social sounds. These findings have implications for approaches to language intervention and for theoretical debates regarding social motivation.

  4. Visual selective attention in amnestic mild cognitive impairment.

    PubMed

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Sex differences in behavioral outcomes following temperature modulation during induced neonatal hypoxic ischemic injury in rats.

    PubMed

    Smith, Amanda L; Garbus, Haley; Rosenkrantz, Ted S; Fitch, Roslyn Holly

    2015-05-22

    Neonatal hypoxia ischemia (HI; reduced oxygen and/or blood flow to the brain) can cause various degrees of tissue damage, as well as subsequent cognitive/behavioral deficits such as motor, learning/memory, and auditory impairments. These outcomes frequently result from cardiovascular and/or respiratory events observed in premature infants. Data suggests that there is a sex difference in HI outcome, with males being more adversely affected relative to comparably injured females. Brain/body temperature may play a role in modulating the severity of an HI insult, with hypothermia during an insult yielding more favorable anatomical and behavioral outcomes. The current study utilized a postnatal day (P) 7 rodent model of HI injury to assess the effect of temperature modulation during injury in each sex. We hypothesized that female P7 rats would benefit more from lowered body temperatures as compared to male P7 rats. We assessed all subjects on rota-rod, auditory discrimination, and spatial/non-spatial maze tasks. Our results revealed a significant benefit of temperature reduction in HI females as measured by most of the employed behavioral tasks. However, HI males benefitted from temperature reduction as measured on auditory and non-spatial tasks. Our data suggest that temperature reduction protects both sexes from the deleterious effects of HI injury, but task and sex specific patterns of relative efficacy are seen.

  6. The inferior colliculus encodes the Franssen auditory spatial illusion

    PubMed Central

    Rajala, Abigail Z.; Yan, Yonghe; Dent, Micheal L.; Populin, Luis C.

    2014-01-01

    Illusions are effective tools for the study of the neural mechanisms underlying perception because neural responses can be correlated to the physical properties of stimuli and the subject’s perceptions. The Franssen illusion (FI) is an auditory spatial illusion evoked by presenting a transient, abrupt tone and a slowly rising, sustained tone of the same frequency simultaneously on opposite sides of the subject. Perception of the FI consists of hearing a single sound, the sustained tone, on the side that the transient was presented. Both subcortical and cortical mechanisms for the FI have been proposed, but, to date, there is no direct evidence for either. The data show that humans and rhesus monkeys perceive the FI similarly. Recordings were taken from single units of the inferior colliculus in the monkey while they indicated the perceived location of sound sources with their gaze. The results show that the transient component of the Franssen stimulus, with a shorter first spike latency and higher discharge rate than the sustained tone, encodes the perception of sound location. Furthermore, the persistent erroneous perception of the sustained stimulus location is due to continued excitation of the same neurons, first activated by the transient, by the sustained stimulus without location information. These results demonstrate for the first time, on a trial-by-trial basis, a correlation between perception of an auditory spatial illusion and a subcortical physiological substrate. PMID:23899307

  7. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
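
    The race-model comparison used in this study follows Miller's inequality: if the two modalities race independently, the cumulative RT distribution for audiovisual targets can never exceed the sum of the unimodal CDFs. A minimal sketch (hypothetical function names and illustrative RTs, not the study's data):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a grid of times."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: P(RT<=t | AV) should not exceed
    P(RT<=t | A) + P(RT<=t | V). Positive return values mark violations,
    commonly taken as evidence of multisensory integration."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Illustrative RTs (ms): all audiovisual responses arrive by 250 ms,
# before any unimodal response, so the race model is violated there.
t = np.array([250.0, 400.0])
viol = race_model_violation([300, 320, 340], [310, 330, 350],
                            [200, 210, 220], t)
print(viol)  # → [1. 0.]: violation at 250 ms, none at 400 ms
```

    In practice the violation is evaluated over the fast quantiles of the RT distribution; comparing its magnitude between cued and uncued targets is how a cueing manipulation can be shown to decrease MSI, as in this study.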

  8. Assessing the Impact of Auditory Peripheral Displays for UAV Operators

    DTIC Science & Technology

    2007-11-01

    Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, Cambridge, MA 02139.

  9. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  10. Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.

    PubMed

    Berlot, Eva; Formisano, Elia; De Martino, Federico

    2018-05-23

    Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. 
However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors 0270-6474/18/384934-09$15.00/0.

  11. Towards neural correlates of auditory stimulus processing: A simultaneous auditory evoked potentials and functional magnetic resonance study using an odd-ball paradigm

    PubMed Central

    Milner, Rafał; Rusiniak, Mateusz; Lewandowska, Monika; Wolak, Tomasz; Ganc, Małgorzata; Piątkowska-Janko, Ewa; Bogorodzki, Piotr; Skarżyński, Henryk

    2014-01-01

    Background: The neural underpinnings of auditory information processing have often been investigated using the odd-ball paradigm, in which infrequent sounds (deviants) are presented within a regular train of frequent stimuli (standards). Traditionally, this paradigm has been applied using either high temporal resolution (EEG) or high spatial resolution (fMRI, PET). However, used separately, these techniques cannot provide information on both the location and time course of particular neural processes. The goal of this study was to investigate the neural correlates of auditory processes with a fine spatio-temporal resolution. A simultaneous auditory evoked potentials (AEP) and functional magnetic resonance imaging (fMRI) technique (AEP-fMRI), together with an odd-ball paradigm, were used. Material/Methods: Six healthy volunteers, aged 20–35 years, participated in an odd-ball simultaneous AEP-fMRI experiment. AEPs in response to acoustic stimuli were used to model bioelectric intracerebral generators, and electrophysiological results were integrated with fMRI data. Results: fMRI activation evoked by standard stimuli was found to occur mainly in the primary auditory cortex. Activity in these regions overlapped with intracerebral bioelectric sources (dipoles) of the N1 component. Dipoles of the N1/P2 complex in response to standard stimuli were also found in the auditory pathway between the thalamus and the auditory cortex. Deviant stimuli induced fMRI activity in the anterior cingulate gyrus, insula, and parietal lobes. Conclusions: The present study showed that neural processes evoked by standard stimuli occur predominantly in subcortical and cortical structures of the auditory pathway. Deviants activate areas non-specific for auditory information processing. PMID:24413019

  12. Investigating the role of visual and auditory search in reading and developmental dyslexia

    PubMed Central

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014

  13. Investigating the role of visual and auditory search in reading and developmental dyslexia.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a "serial" search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d') strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in "serial" search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills.

  14. The relationship of phonological ability, speech perception, and auditory perception in adults with dyslexia

    PubMed Central

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan

    2014-01-01

    This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory, and phonological awareness. Dynamic auditory processing skills were assessed by means of frequency modulation (FM) and amplitude rise time (RT) tasks; an intensity discrimination (ID) task was included as a non-dynamic control task. Speech perception was assessed by means of sentences- and words-in-noise tasks. Group analyses revealed significant group differences in auditory tasks (i.e., RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech-in-noise perception. Finally, inspection of the individual scores revealed that the dyslexic group contained an increased proportion of deviant performers on the slow-dynamic auditory and phonological tasks, yet no individual dyslexic reader displayed a clear pattern of deficiencies across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits that relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences. PMID:25071512

  15. Musical metaphors: evidence for a spatial grounding of non-literal sentences describing auditory events.

    PubMed

    Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2015-03-01

    This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger spatial associations similar to those expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key-press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence-implied pitch height and response location. The results are in line with grounded models of language comprehension, which propose that sensorimotor experiences are elicited when processing literal as well as non-literal sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”

    PubMed Central

    Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.

    2010-01-01

    Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526
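    The hemispheric alpha asymmetry described above is typically condensed into a single lateralization index per trial or subject. A minimal sketch (the normalization and the hypothetical power values are mine; sign conventions vary across studies):

```python
def alpha_lateralization_index(right_power, left_power):
    """Normalized asymmetry of parietal alpha (8-12 Hz) band power.

    Ranges from -1 to +1; positive values mean more right- than
    left-hemisphere alpha. Which sign corresponds to which attended
    direction depends on the convention adopted by a given study.
    """
    return (right_power - left_power) / (right_power + left_power)

# Hypothetical band-power estimates (arbitrary units)
print(alpha_lateralization_index(6.0, 4.0))  # -> 0.2
```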

  17. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    PubMed

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  18. Rising tones and rustling noises: Metaphors in gestural depictions of sounds

    PubMed Central

    Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick

    2017-01-01

    Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of the hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g.
rain drops, rustling leaves) pantomimed and embodied by the participants’ gestures. PMID:28750071

  19. Feature assignment in perception of auditory figure.

    PubMed

    Gregg, Melissa K; Samuel, Arthur G

    2012-08-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed. © 2012 APA, all rights reserved.

  20. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    PubMed

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. 
Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and to understand the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. ERP Evidence of Early Cross-Modal Links between Auditory Selective Attention and Visuo-Spatial Memory

    ERIC Educational Resources Information Center

    Bomba, Marie D.; Singhal, Anthony

    2010-01-01

    Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes during dichotic listening have shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early (Nde) response suggesting that the Ndl, but not the Nde, is affected by non-spatial…

  2. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    difference detectors (Goldberg and Brown, 1969; Emanuel and Letowski, 2009). These cells are sensitive to binaural differences and perform initial coding of... variation. In particular, ~2/3 of the values (68.2%) will be within one standard deviation from the mean, i.e., within the range [μ - σ, μ + σ]. The...single loudspeaker located either close to the listener's ears (… cm) or about 1 m away at 0°, 45° and/or 90° angles (e.g., ASHA, 1991; Goldberg
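    The excerpt's figure that roughly 2/3 (68.2%) of values fall within one standard deviation of the mean follows directly from the standard normal CDF, and can be checked with the standard library:

```python
from statistics import NormalDist

# Coverage of [mu - sigma, mu + sigma] for any normal distribution:
# standardizing reduces it to Phi(1) - Phi(-1), independent of mu and sigma.
coverage = NormalDist().cdf(1.0) - NormalDist().cdf(-1.0)
print(round(coverage, 4))  # -> 0.6827
```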

  3. Seasonal plasticity of auditory hair cell frequency sensitivity correlates with plasma steroid levels in vocal fish

    PubMed Central

    Rohmann, Kevin N.; Bass, Andrew H.

    2011-01-01

    SUMMARY Vertebrates displaying seasonal shifts in reproductive behavior provide the opportunity to investigate bidirectional plasticity in sensory function. The midshipman teleost fish exhibits steroid-dependent plasticity in frequency encoding by eighth nerve auditory afferents. In this study, evoked potentials were recorded in vivo from the saccule, the main auditory division of the inner ear of most teleosts, to test the hypothesis that males and females exhibit seasonal changes in hair cell physiology in relation to seasonal changes in plasma levels of steroids. Thresholds across the predominant frequency range of natural vocalizations were significantly less in both sexes in reproductive compared with non-reproductive conditions, with differences greatest at frequencies corresponding to call upper harmonics. A subset of non-reproductive males exhibiting an intermediate saccular phenotype had elevated testosterone levels, supporting the hypothesis that rising steroid levels induce non-reproductive to reproductive transitions in saccular physiology. We propose that elevated levels of steroids act via long-term (days to weeks) signaling pathways to upregulate ion channel expression generating higher resonant frequencies characteristic of non-mammalian auditory hair cells, thereby lowering acoustic thresholds. PMID:21562181

  4. Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets.

    PubMed

    Mahr, Angela; Wentura, Dirk

    2014-02-01

    Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
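    The detection measure d' used above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch (the correction for extreme rates and the example counts are illustrative choices, not the paper's procedure):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the inverse
    normal CDF finite when a raw rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from one observer's target-present/absent judgments
sensitivity = d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40)
```

    A congruent prime that raises d' indicates a genuine gain in perceptual sensitivity, which is what distinguishes the reported effect from mere response bias.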

  5. Temporal processing dysfunction in schizophrenia.

    PubMed

    Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P

    2008-07-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.

  6. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders.

    PubMed

    Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K

    2013-11-01

    Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.

  7. P50 suppression in children with selective mutism: a preliminary report.

    PubMed

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in children with SM. The classic paired-click paradigm was utilized to assess suppression of the P50 event-related potential to the second, of two sequentially-presented clicks, in ten children with SM and 10 control children. A significant suppression of P50 to the second click was evident in the SM group, whereas no suppression effect was observed in controls. Suppression was evident in 90% of the SM group and in 40% of controls, whereas augmentation was found in 10% and 60%, respectively, yielding a significant association between group and suppression of P50. P50 to the first click was comparable in children with SM and controls. The adult-like, mature P50 suppression effect exhibited by children with SM may reflect a cortical mechanism of compensatory inhibition of irrelevant repetitive information that was not properly suppressed at lower levels of their auditory system. The current data extends our previous findings suggesting that differential auditory processing may be involved in speech selectivity in SM.

  8. Collaborator Support: Identification of Fundamental Visual, Auditory, and Cognitive Requirements for Command and Control Environments

    DTIC Science & Technology

    2009-04-01

    generally involve display design for combat vehicles, such as aircraft or tanks. The strength of such displays is their non-intrusive nature, which...level .05 was used. Table 1: ANOVA Summary Table for Reaction Time. Due to a significant violation of the assumption of sphericity (p=.041...the Greenhouse-Geisser test was used to adjust the degrees of freedom. A significant main effect for cue type was found, F(1.693, 15.233) = 9.077

  9. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. 
When controlling various machine/robot degrees of freedom, the desired movements and actions can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system, and present preliminary results on a real robotic platform.
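    Converting between adjacent levels of such a hierarchy (e.g., a head-centered sound location into body-centered coordinates) is a rigid transform: rotate by the head's orientation relative to the body, then translate by the head's position in the body frame. A minimal 2-D sketch (the geometry is hypothetical, and the real system is 3-D):

```python
import math

def head_to_body(point_head, head_yaw_rad, head_offset_body):
    """Map a 2-D point from head-centered into body-centered coordinates:
    p_body = R(yaw) @ p_head + offset."""
    x, y = point_head
    c, s = math.cos(head_yaw_rad), math.sin(head_yaw_rad)
    ox, oy = head_offset_body
    # Rotate by the head's yaw, then translate by the head's position.
    return (c * x - s * y + ox, s * x + c * y + oy)

# Hypothetical geometry: head yawed 90 degrees relative to the body and
# offset by (0.0, 0.3) in the body frame; a source 1 m straight ahead
# of the head.
bx, by = head_to_body((1.0, 0.0), math.pi / 2, (0.0, 0.3))
```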

  10. Auditory display of knee-joint vibration signals

    NASA Astrophysics Data System (ADS)

    Krishnan, Sridhar; Rangayyan, Rangaraj M.; Bell, G. Douglas; Frank, Cyril B.

    2001-12-01

    Sounds generated by the rubbing of knee-joint surfaces may provide a noninvasive means of assessing articular cartilage degeneration. In the work reported in the present paper, an attempt is made to perform computer-assisted auscultation of knee joints by auditory display (AD) of vibration signals (also known as vibroarthrographic or VAG signals) emitted during active movement of the leg. Two types of AD methods are considered: audification and sonification. In audification, the VAG signals are scaled in time and frequency using a time-frequency distribution to facilitate aural analysis. In sonification, the instantaneous mean frequency and envelope of the VAG signals are derived and used to synthesize sounds that are expected to facilitate more accurate diagnosis than the original signals by improving their aural quality. Auditory classification experiments were performed by two orthopedic surgeons with 37 VAG signals, including 19 normal and 18 abnormal cases. Sensitivity values (correct detection of abnormality) of 31%, 44%, and 83%, and overall classification accuracies of 53%, 40%, and 57% were obtained with the direct playback, audification, and sonification methods, respectively. The corresponding d' scores were estimated to be 1.10, -0.36, and 0.55. The high sensitivity of the sonification method indicates that the technique could lead to improved detection of knee-joint abnormalities; however, additional work is required to improve its specificity and achieve better overall performance.
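    The sonification approach described (synthesizing a tone whose pitch follows the signal's instantaneous mean frequency and whose loudness follows its envelope) can be sketched as below. The RMS envelope estimator and all parameter values are illustrative stand-ins, not the paper's method:

```python
import math

def moving_rms(signal, win):
    """Crude amplitude envelope: RMS over a sliding window."""
    env = []
    for i in range(len(signal)):
        seg = signal[max(0, i - win // 2): i + win // 2 + 1]
        env.append(math.sqrt(sum(s * s for s in seg) / len(seg)))
    return env

def sonify(freqs_hz, envelope, fs=8000):
    """Synthesize env(t) * sin(phase(t)), advancing the phase each sample
    by the instantaneous frequency so pitch tracks the analyzed signal."""
    out, phase = [], 0.0
    for f, a in zip(freqs_hz, envelope):
        phase += 2.0 * math.pi * f / fs
        out.append(a * math.sin(phase))
    return out

# Hypothetical analysis frames: frequency ramping 100 -> 500 Hz, flat envelope
n = 800
freqs = [100.0 + 400.0 * i / n for i in range(n)]
tone = sonify(freqs, [1.0] * n)
```

    Driving the oscillator through an accumulated phase, rather than computing sin(2πf·t) per frame, avoids phase discontinuities (clicks) when the frequency changes between frames.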

  11. Frequency locking in auditory hair cells: Distinguishing between additive and parametric forcing

    NASA Astrophysics Data System (ADS)

    Edri, Yuval; Bozovic, Dolores; Yochelis, Arik

    2016-10-01

    The auditory system displays remarkable sensitivity and frequency discrimination, attributes shown to rely on an amplification process that involves a mechanical as well as a biochemical response. Models that display proximity to an oscillatory onset (also known as Hopf bifurcation) exhibit a resonant response to distinct frequencies of incoming sound, and can explain many features of the amplification phenomenology. To understand the dynamics of this resonance, frequency locking is examined in a system near the Hopf bifurcation and subject to two types of driving forces: additive and parametric. Derivation of a universal amplitude equation that contains both forcing terms enables a study of their relative impact on the hair cell response. In the parametric case, although the resonant solutions are 1 : 1 frequency locked, they show the coexistence of solutions obeying a phase shift of π, a feature typical of the 2 : 1 resonance. Different characteristics are predicted for the transition from unlocked to locked solutions, leading to smooth or abrupt dynamics in response to different types of forcing. The theoretical framework provides a more realistic model of the auditory system, which incorporates a direct modulation of the internal control parameter by an applied drive. The results presented here can be generalized to many other media, including Faraday waves, chemical reactions, and elastically driven cardiomyocytes, which are known to exhibit resonant behavior.
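    Schematically, the two forcing types enter the amplitude (normal-form) equation of a system near a Hopf bifurcation in different places; in a frame rotating with the drive, a generic form reads (notation mine, not the paper's exact equation):

```latex
\frac{dA}{dt} = (\mu + i\nu)A - (1 + i\beta)\lvert A\rvert^{2} A
                + \gamma_{a} + \gamma_{p}\bar{A}
```

    Here A is the complex oscillation amplitude, μ the distance from the Hopf onset, ν the detuning, and β the nonlinear frequency shift. Additive forcing appears as the inhomogeneous term γ_a and fully breaks the phase symmetry, whereas parametric forcing couples to the conjugate Ā and leaves the equation invariant under A → -A; that residual symmetry is why the parametrically locked 1:1 solutions come in pairs shifted by π, as noted in the abstract.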

  12. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  13. The cognitive science of visual-spatial displays: implications for design.

    PubMed

    Hegarty, Mary

    2011-07-01

    This paper reviews cognitive science perspectives on the design of visual-spatial displays and introduces the other papers in this topic. It begins by classifying different types of visual-spatial displays, followed by a discussion of ways in which visual-spatial displays augment cognition and an overview of the perceptual and cognitive processes involved in using displays. The paper then argues for the importance of cognitive science methods to the design of visual displays and reviews some of the main principles of display design that have emerged from these approaches to date. Cognitive scientists have had good success in characterizing the performance of well-defined tasks with relatively simple visual displays, but many challenges remain in understanding the use of complex displays for ill-defined tasks. Current research exemplified by the papers in this topic extends empirical approaches to new displays and domains, informs the development of general principles of graphic design, and addresses current challenges in display design raised by the recent explosion in availability of complex data sets and new technologies for visualizing and interacting with these data. Copyright © 2011 Cognitive Science Society, Inc.

  14. Spatial awareness comparisons between large-screen, integrated pictorial displays and conventional EFIS displays during simulated landing approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    1994-01-01

    An extensive simulation study was performed to determine and compare the spatial awareness of commercial airline pilots on simulated landing approaches using conventional flight displays with their awareness using advanced pictorial 'pathway in the sky' displays. Sixteen commercial airline pilots repeatedly made simulated complex microwave landing system approaches to closely spaced parallel runways with an extremely short final segment. Scenarios involving conflicting traffic situation assessments and recoveries from flight path offset conditions were used to assess spatial awareness (own ship position relative to the desired flight route, the runway, and other traffic) with the various display formats. The situation assessment tools are presented, as well as the experimental designs and the results. The results demonstrate that the integrated pictorial displays substantially increase spatial awareness over conventional electronic flight information systems display formats.

  15. Frequency organization and responses to complex sounds in the medial geniculate body of the mustached bat.

    PubMed

    Wenstrup, J J

    1999-11-01

    The auditory cortex of the mustached bat (Pteronotus parnellii) displays some of the most highly developed physiological and organizational features described in mammalian auditory cortex. This study examines response properties and organization in the medial geniculate body (MGB) that may contribute to these features of auditory cortex. About 25% of 427 auditory responses had simple frequency tuning with single excitatory tuning curves. The remainder displayed more complex frequency tuning using two-tone or noise stimuli. Most of these were combination-sensitive, responsive to combinations of different frequency bands within sonar or social vocalizations. They included FM-FM neurons, responsive to different harmonic elements of the frequency modulated (FM) sweep in the sonar signal, and H1-CF neurons, responsive to combinations of the bat's first sonar harmonic (H1) and a higher harmonic of the constant frequency (CF) sonar signal. Most combination-sensitive neurons (86%) showed facilitatory interactions. Neurons tuned to frequencies outside the biosonar range also displayed combination-sensitive responses, perhaps related to analyses of social vocalizations. Complex spectral responses were distributed throughout dorsal and ventral divisions of the MGB, forming a major feature of this bat's analysis of complex sounds. The auditory sector of the thalamic reticular nucleus also was dominated by complex spectral responses to sounds. The ventral division was organized tonotopically, based on best frequencies of singly tuned neurons and higher best frequencies of combination-sensitive neurons. Best frequencies were lowest ventrolaterally, increasing dorsally and then ventromedially. However, representations of frequencies associated with higher harmonics of the FM sonar signal were reduced greatly. 
Frequency organization in the dorsal division was not tonotopic; within the middle one-third of MGB, combination-sensitive responses to second and third harmonic CF sonar signals (60-63 and 90-94 kHz) occurred in adjacent regions. In the rostral one-third, combination-sensitive responses to second, third, and fourth harmonic FM frequency bands predominated. These FM-FM neurons, thought to be selective for delay between an emitted pulse and echo, showed some organization of delay selectivity. The organization of frequency sensitivity in the MGB suggests a major rewiring of the output of the central nucleus of the inferior colliculus, by which collicular neurons tuned to the bat's FM sonar signals mostly project to the dorsal, not the ventral, division. Because physiological differences between collicular and MGB neurons are minor, a major role of the tecto-thalamic projection in the mustached bat may be the reorganization of responses to provide for cortical representations of sonar target features.

  16. Audio Spatial Representation Around the Body

    PubMed Central

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Studies have found that portions of space around our body are differently coded by our brain. Numerous works have investigated visual and auditory spatial representation, focusing mostly on the spatial representation of stimuli presented at head level, especially in the frontal space. Only a few studies have investigated spatial representation around the entire body and its relationship with motor activity. Moreover, it is still not clear whether the space surrounding us is represented as a unitary dimension or whether it is split up into different portions, differently shaped by our senses and motor activity. To clarify these points, we investigated audio localization of dynamic and static sounds at different body levels. In order to understand the role of a motor action in auditory space representation, we asked subjects to localize sounds by pointing with the hand or the foot, or by giving a verbal answer. We found that audio sound localization differed depending on the body part considered. Moreover, a different pattern of response was observed when subjects were asked to make actions with respect to the verbal responses. These results suggest that the audio space around our body is split into various spatial portions, which are perceived differently: front, back, around chest, and around foot, suggesting that these four areas could be differently modulated by our senses and our actions. PMID:29249999

  17. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex.

    PubMed

    Hausfeld, Lars; Riecke, Lars; Formisano, Elia

    2018-06-01

    Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Blast-Induced Tinnitus and Elevated Central Auditory and Limbic Activity in Rats: A Manganese-Enhanced MRI and Behavioral Study.

    PubMed

    Ouyang, Jessica; Pace, Edward; Lepczyk, Laura; Kaufman, Michael; Zhang, Jessica; Perrine, Shane A; Zhang, Jinsheng

    2017-07-07

    Blast-induced tinnitus is the number one service-connected disability that currently affects military personnel and veterans. To elucidate its underlying mechanisms, we subjected 13 Sprague Dawley adult rats to unilateral 14 psi blast exposure to induce tinnitus and measured auditory and limbic brain activity using manganese-enhanced MRI (MEMRI). Tinnitus was evaluated with a gap detection acoustic startle reflex paradigm, while hearing status was assessed with prepulse inhibition (PPI) and auditory brainstem responses (ABRs). Both anxiety and cognitive functioning were assessed using elevated plus maze and Morris water maze, respectively. Five weeks after blast exposure, 8 of the 13 blasted rats exhibited chronic tinnitus. While acoustic PPI remained intact and ABR thresholds recovered, the ABR wave P1-N1 amplitude reduction persisted in all blast-exposed rats. No differences in spatial cognition were observed, but blasted rats as a whole exhibited increased anxiety. MEMRI data revealed a bilateral increase in activity along the auditory pathway and in certain limbic regions of rats with tinnitus compared to age-matched controls. Taken together, our data suggest that while blast-induced tinnitus may play a role in auditory and limbic hyperactivity, the non-auditory effects of blast and potential traumatic brain injury may also exert an effect.

  19. Neural preservation underlies speech improvement from auditory deprivation in young cochlear implant recipients.

    PubMed

    Feng, Gangyi; Ingvalson, Erin M; Grieco-Calub, Tina M; Roberts, Megan Y; Ryan, Maura E; Birmingham, Patrick; Burrowes, Delilah; Young, Nancy M; Wong, Patrick C M

    2018-01-30

    Although cochlear implantation enables some children to attain age-appropriate speech and language development, communicative delays persist in others, and outcomes are quite variable and difficult to predict, even for children implanted early in life. To understand the neurobiological basis of this variability, we used presurgical neural morphological data obtained from MRI of individual pediatric cochlear implant (CI) candidates implanted younger than 3.5 years to predict variability of their speech-perception improvement after surgery. We first compared neuroanatomical density and spatial pattern similarity of CI candidates to that of age-matched children with normal hearing, which allowed us to detail neuroanatomical networks that were either affected or unaffected by auditory deprivation. This information enabled us to build machine-learning models to predict the individual children's speech development following CI. We found that regions of the brain that were unaffected by auditory deprivation, in particular the auditory association and cognitive brain regions, produced the highest accuracy, specificity, and sensitivity in patient classification and the most precise prediction results. These findings suggest that brain areas unaffected by auditory deprivation are critical to developing closer to typical speech outcomes. Moreover, the findings suggest that determination of the type of neural reorganization caused by auditory deprivation before implantation is valuable for predicting post-CI language outcomes for young children.

  20. The effects of neurofeedback on oscillatory processes related to tinnitus.

    PubMed

    Hartmann, Thomas; Lorenz, Isabel; Müller, Nadia; Langguth, Berthold; Weisz, Nathan

    2014-01-01

    Although widely used, no proof exists for the feasibility of neurofeedback for reinstating the disordered excitatory-inhibitory balance, marked by a decrease in auditory alpha power, in tinnitus patients. The current study scrutinizes the ability of neurofeedback to focally increase alpha power in auditory areas in comparison to the more common rTMS. Resting-state MEG was measured before and after neurofeedback (n = 8) and rTMS (n = 9) intervention respectively. Source level power and functional connectivity were analyzed with a focus on the alpha band. Only neurofeedback produced a significant decrease in tinnitus symptoms and, more important for the context of the study, a spatially circumscribed increase in alpha power in right auditory regions. Connectivity analysis revealed higher outgoing connectivity in a region ultimately neighboring the area in which power increases were observed. Neurofeedback decreases tinnitus symptoms and increases alpha power in a spatially circumscribed manner. In addition, compared to a more established brain stimulation-based intervention, neurofeedback is a promising approach to renormalize the excitatory-inhibitory imbalance putatively underlying tinnitus. This study is the first to demonstrate the feasibility of focally enhancing alpha activity in tinnitus patients by means of neurofeedback.

  1. ERGONOMICS ABSTRACTS 48983-49619.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    THE LITERATURE OF ERGONOMICS, OR BIOTECHNOLOGY, IS CLASSIFIED INTO 15 AREAS--METHODS, SYSTEMS OF MEN AND MACHINES, VISUAL AND AUDITORY AND OTHER INPUTS AND PROCESSES, INPUT CHANNELS, BODY MEASUREMENTS, DESIGN OF CONTROLS AND INTEGRATION WITH DISPLAYS, LAYOUT OF PANELS AND CONSOLES, DESIGN OF WORK SPACE, CLOTHING AND PERSONAL EQUIPMENT, SPECIAL…

  2. The use of listening devices to ameliorate auditory deficit in children with autism.

    PubMed

    Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna

    2014-02-01

    To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency modulation (FM; radio transmission) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB], spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise (P < .001 [11.6% to 21.7%]). Furthermore, both participant and teacher questionnaire data revealed device-related benefits across a range of evaluation categories including Effect of Background Noise (P = .036 [-60.7% to -2.8%]) and Ease of Communication (P = .019 [-40.1% to -5.0%]). Eight of the 10 participants who undertook the 6-week device trial remained consistent FM users at study completion. Sustained use of FM listening devices can enhance speech perception in noise, aid social interaction, and improve educational outcomes in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.

  3. Three dimensional audio versus head down TCAS displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Pittman, Marc T.

    1994-01-01

    The advantage of a head up auditory display was evaluated in an experiment designed to measure and compare the acquisition time for capturing visual targets under two conditions: Standard head down traffic collision avoidance system (TCAS) display, and three-dimensional (3-D) audio TCAS presentation. Ten commercial airline crews were tested under full mission simulation conditions at the NASA Ames Crew-Vehicle Systems Research Facility Advanced Concepts Flight Simulator. Scenario software generated targets corresponding to aircraft which activated a 3-D aural advisory or a TCAS advisory. Results showed a significant difference in target acquisition time between the two conditions, favoring the 3-D audio TCAS condition by 500 ms.

  4. Cognitive considerations for helmet-mounted display design

    NASA Astrophysics Data System (ADS)

    Francis, Gregory; Rash, Clarence E.

    2010-04-01

    Helmet-mounted displays (HMDs) are designed as a tool to increase performance. To achieve this, there must be an accurate transfer of information from the HMD to the user. Ideally, an HMD would be designed to accommodate the abilities and limitations of users' cognitive processes. It is not enough for the information (whether visual, auditory, or tactual) to be displayed; the information must be perceived, attended, remembered, and organized in a way that guides appropriate decision-making, judgment, and action. Following a general overview, specific subtopics of cognition, including perception, attention, memory, knowledge, decision-making, and problem solving are explored within the context of HMDs.

  5. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/postcomparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. 
In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback. Furthermore, independent of the training group, a significant spatial pre-post difference was found in the event-related component P200 (P = .04).

  6. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms

    PubMed Central

    Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe

    2017-01-01

    Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory – audio-visual – targets in the adaptation phase were used, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs, than the sensorimotor pointing activity per se. 
These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233

  7. Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location.

    PubMed

    Smulders, Tom V; Jarvis, Erich D

    2013-11-01

    Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. The Acuity of Echolocation: Spatial Resolution in Sighted Persons Compared to the Performance of an Expert Who Is Blind

    ERIC Educational Resources Information Center

    Teng, Santani; Whitney, David

    2011-01-01

    Echolocation is a specialized application of spatial hearing that uses reflected auditory information to localize objects and represent the external environment. Although it has been documented extensively in nonhuman species, such as bats and dolphins, its use by some persons who are blind as a navigation and object-identification aid has…

  9. Focusing, Sustaining, and Switching Attention

    DTIC Science & Technology

    2013-09-12

    [Fragmentary DTIC abstract excerpts] Aim 2: effects on non-spatial features; we measured whether selective attention to an ongoing target improves with time. Related presentation: Shinn-Cunningham BG (2013), "Spatial hearing in rooms: Effects on selective auditory attention and sound localization" [talk], Neuroscience conference. Hypotheses tested include 1) whether room reverberation interferes with selective attention, and 2) whether selective attention to an ongoing target improves with time.

  10. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    PubMed Central

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues, characteristic of reverberant speech. This stimulus, named amplitude modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
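    The stimulus described above combines a small interaural frequency difference (producing a cycling binaural beat in the fine structure) with a shared amplitude-modulation envelope. A minimal sketch of such a signal is below; the carrier, beat, and modulation frequencies are illustrative assumptions, not the parameters used in the study.

    ```python
    import numpy as np

    FS = 44100  # sample rate in Hz (assumed)

    def am_binaural_beat(carrier_hz=500.0, beat_hz=4.0, mod_hz=8.0,
                         duration_s=1.0, fs=FS):
        """Left ear receives the carrier, right ear the carrier + beat_hz,
        so the interaural phase difference cycles at beat_hz; both ears
        share one raised-cosine amplitude-modulation envelope at mod_hz."""
        t = np.arange(int(duration_s * fs)) / fs
        envelope = 0.5 * (1.0 - np.cos(2 * np.pi * mod_hz * t))  # 0..1
        left = envelope * np.sin(2 * np.pi * carrier_hz * t)
        right = envelope * np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
        return np.stack([left, right], axis=1)  # shape (n_samples, 2)

    stereo = am_binaural_beat()
    ```

    Because the envelope is shared across ears, the binaural cue carried in the fine structure can be sampled at any point of the modulation cycle, which is what lets the paradigm test whether listeners weight the rising segments more heavily.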

  11. Using spatialized sound cues in an auditorily rich environment

    NASA Astrophysics Data System (ADS)

    Brock, Derek; Ballas, James A.; Stroup, Janet L.; McClimens, Brian

    2004-05-01

    Previous Navy research has demonstrated that spatialized sound cues in an otherwise quiet setting are useful for directing attention and improving performance by 16.8% or more in the decision component of a complex dual-task. To examine whether the benefits of this technique are undermined in the presence of additional, unrelated sounds, a background recording of operations in a Navy command center and a voice communications response task [Bolia et al., J. Acoust. Soc. Am. 107, 1065-1066 (2000)] were used to simulate the conditions of an auditorily rich military environment. Without the benefit of spatialized sound cues, performance in the presence of this extraneous auditory information, as measured by decision response times, was an average of 13.6% worse than baseline performance in an earlier study. Performance improved when the cues were present by an average of 18.3%, but this improvement remained below the improvement observed in the baseline study by an average of 11.5%. It is concluded that while the two types of extraneous sound information used in this study degrade performance in the decision task, there is no interaction with the relative performance benefit provided by the use of spatialized auditory cues. [Work supported by ONR.]

  12. Effects of divided attention and operating room noise on perception of pulse oximeter pitch changes: a laboratory study.

    PubMed

    Stevenson, Ryan A; Schlesinger, Joseph J; Wallace, Mark T

    2013-02-01

    Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded reflecting anesthesiologists' abilities to detect changes in oxygen saturation across three levels of visual attention in quiet and with noise. Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation concentrations conveyed by auditory cues signaling 99 and 98% saturation. These effects are compounded by auditory noise, up to a 17% decline in performance. These deficits are seen in the ability to accurately detect a change in oxygen saturation and in speed of response. Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors. Reducing such errors is thus a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists have to attend to many tasks and are thus susceptible to error.
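    The auditory display studied above works by lowering the pitch of each pulse tone as saturation falls, which is why a one-percentage-point change (99% vs. 98%) is such a subtle cue. A minimal sketch of this kind of mapping is below; the base frequency and semitone step are illustrative assumptions, not values from any specific monitor.

    ```python
    def pulse_tone_hz(spo2_percent, base_hz=880.0, semitones_per_percent=0.5):
        """Map an SpO2 reading to a pulse-tone frequency: each 1% drop
        below 100% lowers the pitch by a fixed number of semitones on an
        equal-tempered scale (12 semitones per octave)."""
        drop = max(0.0, 100.0 - spo2_percent)
        return base_hz * 2.0 ** (-semitones_per_percent * drop / 12.0)

    for spo2 in (100, 99, 98, 95):
        print(spo2, round(pulse_tone_hz(spo2), 1))
    ```

    With a half-semitone step, the 99-to-98% transition changes the tone frequency by under 3%, which helps explain why detection degrades so sharply under visual load and background noise.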

  13. Alpha Rhythms in Audition: Cognitive and Clinical Perspectives

    PubMed Central

    Weisz, Nathan; Hartmann, Thomas; Müller, Nadia; Lorenz, Isabel; Obleser, Jonas

    2011-01-01

    Like the visual and the sensorimotor systems, the auditory system exhibits pronounced alpha-like resting oscillatory activity. Due to the relatively small spatial extent of auditory cortical areas, this rhythmic activity is less obvious and frequently masked by non-auditory alpha-generators when recording non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). Following stimulation with sounds, marked desynchronizations can be observed between 6 and 12 Hz, which can be localized to the auditory cortex. However, knowledge about the functional relevance of the auditory alpha rhythm has remained scarce so far. Results from the visual and sensorimotor system have fuelled the hypothesis of alpha activity reflecting a state of functional inhibition. The current article pursues several intentions: (1) Firstly we review and present own evidence (MEG, EEG, sEEG) for the existence of an auditory alpha-like rhythm independent of visual or motor generators, something that is occasionally met with skepticism. (2) In a second part we will discuss tinnitus and how this audiological symptom may relate to reduced background alpha. The clinical part will give an introduction into a method which aims to modulate neurophysiological activity hypothesized to underlie this distressing disorder. Using neurofeedback, one is able to directly target relevant oscillatory activity. Preliminary data point to a high potential of this approach for treating tinnitus. (3) Finally, in a cognitive neuroscientific part we will show that auditory alpha is modulated by anticipation/expectations with and without auditory stimulation. We will also introduce ideas and initial evidence that alpha oscillations are involved in the most complex capability of the auditory system, namely speech perception. 
The evidence presented in this article corroborates findings from other modalities, indicating that alpha-like activity functionally has a universal inhibitory role across sensory modalities. PMID:21687444

  14. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    PubMed

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. An auditory attention task: a note on the processing of verbal information.

    PubMed

    Linde, L

    1994-04-01

    In an auditory attention task, subjects were required to reproduce spatial relationships between letters from auditorily presented verbal information containing the prepositions "before" or "after." It was assumed that propositions containing "after" induce a conflict between the temporal and the semantically implied spatial order of the letters. Data from 36 subjects show that propositions with "after" are more difficult to process. A significant, general training effect appeared. 200 mg of caffeine had some beneficial effect on the performance of 18 subjects who had been awake for about 22 hours and were tested at 6 a.m.; however, this benefit was not related to the amount of conflict but applied to items both with and without conflict. In contrast, the effect of caffeine on 18 subjects tested at 4 p.m. after normal sleep was slightly negative.

  16. Perception and psychological evaluation for visual and auditory environment based on the correlation mechanisms

    NASA Astrophysics Data System (ADS)

    Fujii, Kenji

    2002-06-01

    In this dissertation, the correlation mechanism is introduced as a model of processes in visual perception. It is well established that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that the correlation mechanism can be applied to temporal and spatial vision as well as to audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of the missing fundamental is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue in texture preference judgments. (3) In addition, the autocorrelation function (ACF) and interaural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described, and parameters extracted from the ACF and IACF proved useful in assessing subjective annoyance for noise. Thesis advisor: Yoichi Ando. Copies of this thesis, written in English, can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
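The missing-fundamental result can be illustrated with a short autocorrelation sketch (a generic illustration, not the dissertation's own code): a complex tone containing only harmonics 2-4 of 200 Hz, with the fundamental absent, still produces its strongest ACF peak at the 200 Hz period.

```python
import numpy as np

# Missing-fundamental demo via autocorrelation (generic sketch):
# harmonics 2-4 of a 200 Hz fundamental, fundamental itself absent.
fs = 8000
t = np.arange(0, 0.1, 1 / fs)
x = sum(np.sin(2 * np.pi * 200 * k * t) for k in (2, 3, 4))

def normalized_acf(x):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

acf = normalized_acf(x)
# The first major peak (skipping very small lags) sits at the *missing*
# fundamental's period of 40 samples, mirroring auditory pitch perception.
lag = np.argmax(acf[20:]) + 20
print(fs / lag)  # ~200 Hz
```

The same normalized-ACF measure generalizes to the texture and noise analyses described above, where peak positions and amplitudes serve as the extracted parameters.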

  17. Effects of Bone Vibrator Position on Auditory Spatial Perception Tasks.

    PubMed

    McBride, Maranda; Tran, Phuong; Pollard, Kimberly A; Letowski, Tomasz; McMillan, Garnett P

    2015-12-01

    This study assessed listeners' ability to localize spatially differentiated virtual audio signals delivered by bone conduction (BC) vibrators and circumaural air conduction (AC) headphones. Although the skull offers little intracranial sound wave attenuation, previous studies have demonstrated listeners' ability to localize auditory signals delivered by a pair of BC vibrators coupled to the mandibular condyle bones. The current study extended this research to other BC vibrator locations on the skull. Each participant listened to virtual audio signals originating from 16 different horizontal locations using circumaural headphones or BC vibrators placed in front of, above, or behind the listener's ears. The listener's task was to indicate the signal's perceived direction of origin. Localization accuracy with the BC front and BC top positions was comparable to that with the headphones, but responses for the BC back position were less accurate than both the headphones and BC front position. This study supports the conclusion of previous studies that listeners can localize virtual 3D signals equally well using AC and BC transducers. Based on these results, it is apparent that BC devices could be substituted for AC headphones with little to no localization performance degradation. BC headphones can be used when spatial auditory information needs to be delivered without occluding the ears. Although vibrator placement in front of the ears appears optimal from the localization standpoint, the top or back position may be acceptable from an operational standpoint or if the BC system is integrated into headgear. © 2015, Human Factors and Ergonomics Society.

  18. Alpha power indexes task-related networks on large and small scales: A multimodal ECoG study in humans and a non-human primate.

    PubMed

    de Pesters, A; Coon, W G; Brunner, P; Gunduz, A; Ritaccio, A L; Brunet, N M; de Weerd, P; Roberts, M J; Oostenveld, R; Fries, P; Schalk, G

    2016-07-01

    Performing different tasks, such as generating motor movements or processing sensory input, requires the recruitment of specific networks of neuronal populations. Previous studies suggested that power variations in the alpha band (8-12Hz) may implement such recruitment of task-specific populations by increasing cortical excitability in task-related areas while inhibiting population-level cortical activity in task-unrelated areas (Klimesch et al., 2007; Jensen and Mazaheri, 2010). However, the precise temporal and spatial relationships between the modulatory function implemented by alpha oscillations and population-level cortical activity remained undefined. Furthermore, while several studies suggested that alpha power indexes task-related populations across large and spatially separated cortical areas, it was largely unclear whether alpha power also differentially indexes smaller networks of task-related neuronal populations. Here we addressed these questions by investigating the temporal and spatial relationships of electrocorticographic (ECoG) power modulations in the alpha band and in the broadband gamma range (70-170Hz, indexing population-level activity) during auditory and motor tasks in five human subjects and one macaque monkey. In line with previous research, our results confirm that broadband gamma power accurately tracks task-related behavior and that alpha power decreases in task-related areas. More importantly, they demonstrate that alpha power suppression lags population-level activity in auditory areas during the auditory task, but precedes it in motor areas during the motor task. This suppression of alpha power in task-related areas was accompanied by an increase in areas not related to the task. In addition, we show for the first time that these differential modulations of alpha power could be observed not only across widely distributed systems (e.g., motor vs. auditory system), but also within the auditory system. 
Specifically, alpha power was suppressed in the locations within the auditory system that most robustly responded to particular sound stimuli. Altogether, our results provide experimental evidence for a mechanism that preferentially recruits task-related neuronal populations by increasing cortical excitability in task-related cortical areas and decreasing cortical excitability in task-unrelated areas. This mechanism is implemented by variations in alpha power and is common to humans and the non-human primate under study. These results contribute to an increasingly refined understanding of the mechanisms underlying the selection of the specific neuronal populations required for task execution. Copyright © 2016 Elsevier Inc. All rights reserved.
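The two frequency bands the study contrasts can be separated with a simple periodogram-based band-power estimate (a minimal sketch on a synthetic signal, not the study's ECoG pipeline):

```python
import numpy as np

# Periodogram band power: alpha = 8-12 Hz, broadband gamma = 70-170 Hz.
def band_power(x, fs, lo, hi):
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return spec[band].mean()

fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic "channel": strong 10 Hz rhythm plus low-level noise
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(x, fs, 8, 12)
gamma = band_power(x, fs, 70, 170)
print(alpha > gamma)  # True: the 10 Hz component dominates
```

Tracking these two quantities over sliding windows, per electrode, is what allows the lead/lag relationship between alpha suppression and population-level (gamma) activity to be measured.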

  19. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
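Equalizing perceptual magnitudes across modalities follows from inverting Stevens' power law, psi = k * phi**a. A minimal sketch (the exponents below are typical textbook values, not the ones used in the study, and the COP-error mapping is illustrative):

```python
# Stevens' power law: perceived magnitude psi = k * phi**a.
# Exponents are typical textbook values, NOT the study's own.
A_BRIGHTNESS = 0.33  # brightness of a point source
A_LOUDNESS = 0.67    # loudness of a 3 kHz tone

def physical_for_percept(psi, a, k=1.0):
    """Invert psi = k * phi**a to find the physical intensity phi
    that produces a target perceived magnitude psi."""
    return (psi / k) ** (1.0 / a)

# A given COP-to-target error maps to one perceptual scale, then to
# modality-specific physical intensities that should "feel" equal:
for err in (0.5, 1.0, 2.0):
    print(err,
          physical_for_percept(err, A_BRIGHTNESS),
          physical_for_percept(err, A_LOUDNESS))
```

Because the auditory exponent is larger, a doubling of perceived error requires a smaller relative change in sound intensity than in brightness, which is why the two feedback channels must be calibrated before comparing learning effects.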

  20. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.

  1. The Influence of Phonetic Dimensions on Aphasic Speech Perception

    ERIC Educational Resources Information Center

    Hessler, Dorte; Jonkers, Roel; Bastiaanse, Roelien

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with "audiovisual", "auditory only" and "visual only" stimulus display. Subjects had to…

  2. Design Considerations and Research Needs for Expanding the Current Perceptual Model of Spatial Orientation into an In-Cockpit Spatial Disorientation Warning System

    DTIC Science & Technology

    2016-11-30

    USAARL Report No. 2017-07: Design Considerations and Research Needs for Expanding the Current Perceptual Model of Spatial Orientation into an In-Cockpit Spatial Disorientation Warning System. Authors (partially truncated in the record): … Brill, Angus H. Rupert. Affiliations: U.S. Army Aeromedical Research Laboratory; Laulima Government Solutions, LLC; National AeroSpace Training and Research Center; Embry-Riddle Aeronautical University; U.S. Air Force Research Laboratory.

  3. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders

    PubMed Central

    Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.

    2013-01-01

    Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845

  4. Auditory Resting-State Network Connectivity in Tinnitus: A Functional MRI Study

    PubMed Central

    Maudoux, Audrey; Lefebvre, Philippe; Cabay, Jean-Evrard; Demertzi, Athena; Vanhaudenhuyse, Audrey; Laureys, Steven; Soddu, Andrea

    2012-01-01

    The underlying functional neuroanatomy of tinnitus remains poorly understood. Few studies have focused on functional cerebral connectivity changes in tinnitus patients. The aim of this study was to test whether functional MRI “resting-state” connectivity patterns in the auditory network differ between tinnitus patients and normal controls. Thirteen chronic tinnitus subjects and fifteen age-matched healthy controls were studied on a 3-Tesla MRI scanner. Connectivity was investigated using independent component analysis and an automated component selection approach taking into account the spatial and temporal properties of each component. Connectivity in extra-auditory regions such as the brainstem, basal ganglia/NAc, cerebellum, parahippocampal, right prefrontal, parietal, and sensorimotor areas was found to be increased in tinnitus subjects. The right primary auditory cortex, left prefrontal cortex, left fusiform gyrus, and bilateral occipital regions showed decreased connectivity in tinnitus. These results show that there is a modification of cortical and subcortical functional connectivity in tinnitus encompassing attentional, mnemonic, and emotional networks. Our data corroborate the hypothesized implication of non-auditory regions in tinnitus physiopathology and suggest that various regions of the brain are involved in the persistent awareness of the phenomenon, as well as in the development of the associated distress leading to disabling chronic tinnitus. PMID:22574141

  5. [In Process Citation]

    PubMed

    Ackermann; Mathiak

    1999-11-01

    Pure word deafness (auditory verbal agnosia) is characterized by impaired auditory comprehension, repetition of verbal material, and writing to dictation, whereas spontaneous speech production and reading remain largely unaffected. Sometimes this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and of suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and might therefore mediate residual auditory functions. Lip reading and/or a slowed speaking rate often allow speech comprehension deficits to be compensated within certain limits. Apart from a few exceptions, the available reports of pure word deafness document a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues, and/or spatial localization of acoustic events were compromised as well. The variable constellation of auditory signs and symptoms observed in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving the encoding of specific stimulus parameters, as documented in a variety of non-human species. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990), affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of Wernicke's area from auditory input (Geschwind 1965) and/or an impairment of the suggested "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere. Only a few instances of a rather isolated disruption of the discrimination/identification of nonverbal sound sources, with uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.

  6. Spatiotemporal dynamics of auditory attention synchronize with speech

    PubMed Central

    Wöstmann, Malte; Herrmann, Björn; Maess, Burkhard

    2016-01-01

    Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8–12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1–5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener’s enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content. PMID:27001861
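Hemispheric alpha lateralization of this kind is commonly quantified with a normalized index contrasting the two hemispheres (an assumed convention for illustration; the abstract does not state the study's exact formula). A toy sketch:

```python
import numpy as np

def alpha_lateralization_index(alpha_ipsi, alpha_contra):
    """Normalized difference of alpha power ipsi- vs. contralateral to
    the attended ear; positive values mean more ipsilateral alpha."""
    return (alpha_ipsi - alpha_contra) / (alpha_ipsi + alpha_contra)

# Toy time courses: alpha power fluctuating at a 4 Hz "speech rate"
t = np.arange(0, 2, 0.01)
ipsi = 1.0 + 0.3 * np.sin(2 * np.pi * 4 * t)    # attention boosts ipsi alpha
contra = 1.0 - 0.3 * np.sin(2 * np.pi * 4 * t)
ali = alpha_lateralization_index(ipsi, contra)
print(ali.max())  # modulation depth of the lateralization index
```

In the study's framing, it is the rhythmic modulation of such an index at the speech rate, rather than its mean level, that predicts stream-specific comprehension.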

  7. Delayed auditory pathway maturation and prematurity.

    PubMed

    Koenighofer, Martin; Parzefall, Thomas; Ramsebner, Reinhard; Lucas, Trevor; Frei, Klemens

    2015-06-01

    Hearing loss is the most common sensory disorder in developed countries and leads to a severe reduction in quality of life. In this uncontrolled case series, we evaluated auditory development in patients suffering from congenital nonsyndromic hearing impairment related to preterm birth. Six patients, delivered preterm (25th-35th gestational weeks) and suffering from mild to profound congenital nonsyndromic hearing impairment, descended from healthy, nonconsanguineous parents and were evaluated by otoacoustic emissions, tympanometry, brainstem-evoked response audiometry, and genetic testing. All patients were treated with hearing aids, and one patient required cochlear implantation. One preterm infant (32nd gestational week) initially presented with a 70 dB hearing loss, accompanied by absent otoacoustic emissions and normal tympanometric findings. The patient was treated with hearing aids and displayed a gradual improvement in bilateral hearing that completely normalized by 14 months of age, accompanied by the development of otoacoustic emission responses. Conclusions: We present here for the first time a fully documented preterm patient with delayed auditory pathway maturation and normalization of hearing within 14 months of birth. Although rare, postpartum development of the auditory system should therefore be considered in the initial stages of treating preterm hearing-impaired patients.

  8. Immediate early gene expression following exposure to acoustic and visual components of courtship in zebra finches.

    PubMed

    Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A

    2005-12-07

    Sensory-driven immediate early gene (IEG) expression has been a key tool for exploring auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and the caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to the acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of ZENK response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of the ZENK response that was independent of sex, brain region, and treatment condition, such that ZENK immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.

  9. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. 
Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Visualization of spiral ganglion neurites within the scala tympani with a cochlear implant in situ

    PubMed Central

    Chikar, Jennifer A.; Batts, Shelley A.; Pfingst, Bryan E.; Raphael, Yehoash

    2009-01-01

    Current cochlear histology methods do not allow in situ processing of cochlear implants. The metal components of the implant preclude standard embedding and mid-modiolar sectioning, and whole mounts do not have the spatial resolution needed to view the implant within the scala tympani. One focus of recent auditory research is the regeneration of structures within the cochlea, particularly the ganglion cells and their processes, and there are multiple potential benefits to cochlear implant users from this work. To facilitate experimental investigations of auditory nerve regeneration performed in conjunction with cochlear implantation, it is critical to visualize the cochlear tissue and the implant together to determine if the nerve has made contact with the implant. This paper presents a novel histological technique that enables simultaneous visualization of the in situ cochlear implant and neurofilament – labeled nerve processes within the scala tympani, and the spatial relationship between them. PMID:19428528

  11. Visualization of spiral ganglion neurites within the scala tympani with a cochlear implant in situ.

    PubMed

    Chikar, Jennifer A; Batts, Shelley A; Pfingst, Bryan E; Raphael, Yehoash

    2009-05-15

    Current cochlear histology methods do not allow in situ processing of cochlear implants. The metal components of the implant preclude standard embedding and mid-modiolar sectioning, and whole mounts do not have the spatial resolution needed to view the implant within the scala tympani. One focus of recent auditory research is the regeneration of structures within the cochlea, particularly the ganglion cells and their processes, and there are multiple potential benefits to cochlear implant users from this work. To facilitate experimental investigations of auditory nerve regeneration performed in conjunction with cochlear implantation, it is critical to visualize the cochlear tissue and the implant together to determine if the nerve has made contact with the implant. This paper presents a novel histological technique that enables simultaneous visualization of the in situ cochlear implant and neurofilament-labeled nerve processes within the scala tympani, and the spatial relationship between them.

  12. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  13. Identifying musical pieces from fMRI data using encoding and decoding models.

    PubMed

    Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge

    2018-02-02

    Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in the auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in the auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for the music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
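The entropy factor can be made concrete with a histogram-based Shannon entropy estimate (an illustrative stand-in for the paper's information-content measure, not its actual feature computation): signals with flatter amplitude distributions carry more bits per sample.

```python
import numpy as np

# Histogram-based Shannon entropy of a signal's amplitude distribution.
def shannon_entropy(x, bins=16):
    """Entropy in bits; upper bound is log2(bins)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = rng.uniform(-1, 1, 10_000)            # high information content
tone = np.sin(np.linspace(0, 100, 10_000))   # highly redundant waveform
print(shannon_entropy(flat) > shannon_entropy(tone))  # True
```

Under this reading, musical pieces whose feature distributions are closer to uniform (higher entropy) leave more distinctive fingerprints for a decoder to match.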

  14. Biasing the content of hippocampal replay during sleep

    PubMed Central

    Bendor, Daniel; Wilson, Matthew A.

    2013-01-01

    The hippocampus plays an essential role in encoding self-experienced events into memory. During sleep, neural activity in the hippocampus related to a recent experience has been observed to spontaneously reoccur, and this “replay” has been postulated to be important for memory consolidation. Task-related cues can enhance memory consolidation when presented during a post-training sleep session, and if memories are consolidated by hippocampal replay, a specific enhancement for this replay should also be observed. To test this, we have trained rats on an auditory-spatial association task, while recording from neuronal ensembles in the hippocampus. Here we report that during sleep, a task-related auditory cue biases reactivation events towards replaying the spatial memory associated with that cue. These results indicate that sleep replay can be manipulated by external stimulation, and provide further evidence for the role of hippocampal replay in memory consolidation. PMID:22941111

  15. Sonification of optical coherence tomography data and images

    PubMed Central

    Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.

    2010-01-01

    Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
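
    Parameter-mapped sonification of the kind described, applied for instance to a single axial scan (A-scan), can be sketched by mapping each depth sample to the pitch and amplitude of a short tone. The mapping ranges and tone durations below are arbitrary illustrations, not the settings used by the authors:

```python
import numpy as np

def sonify_ascan(ascan, fs=8000, tone_dur=0.05, f_lo=200.0, f_hi=2000.0):
    """Parameter-mapped sonification of one A-scan:
    depth position -> pitch (f_lo..f_hi), sample intensity -> amplitude."""
    ascan = np.asarray(ascan, dtype=float)
    amp = ascan / max(ascan.max(), 1e-12)         # normalize intensities
    t = np.arange(int(tone_dur * fs)) / fs
    freqs = np.linspace(f_lo, f_hi, len(ascan))   # deeper sample = higher pitch
    tones = [a * np.sin(2 * np.pi * f * t) for a, f in zip(amp, freqs)]
    return np.concatenate(tones)

# Example: a 16-sample A-scan with one bright reflector in the middle
# produces silence, then a single audible mid-pitched tone, then silence.
ascan = np.zeros(16)
ascan[8] = 1.0
audio = sonify_ascan(ascan)
```

The resulting array can be written to a WAV file or streamed to a sound device, giving the listener a pitch-over-time rendering of reflectivity versus depth.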

  16. Auditory processing deficits are sometimes necessary and sometimes sufficient for language difficulties in children: Evidence from mild to moderate sensorineural hearing loss.

    PubMed

    Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart

    2017-09-01

    There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, and were consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, and were consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, and were consistent with an association model.
Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    PubMed

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. 
Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.

  18. What You Don't Notice Can Harm You: Age-Related Differences in Detecting Concurrent Visual, Auditory, and Tactile Cues.

    PubMed

    Pitts, Brandon J; Sarter, Nadine

    2018-06-01

    Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time, and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.

  19. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG

    PubMed Central

    2014-01-01

    Background: We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods: Following the work of de Munck et al., our approach is based on maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical across all channels, we evaluate it for each individual channel. Results: Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs to show the limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions: The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e., when an ECD of only one hemisphere habituates whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, such as those involving language processing. PMID:24939398
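
    The Kronecker approximation at the heart of this method is what makes the spatio-temporal noise covariance tractable: if S is the spatial (channel) covariance and T the temporal (sample) covariance, then C ≈ S ⊗ T, and the inverse and log-determinant needed for the likelihood factor into operations on the two small matrices. A numpy sketch of these identities, illustrative rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(n):
    """Random symmetric positive-definite matrix (a stand-in covariance)."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

S = random_spd(3)   # spatial covariance: 3 channels
T = random_spd(4)   # temporal covariance: 4 samples
C = np.kron(S, T)   # full 12x12 spatio-temporal covariance

# inv(S (x) T) = inv(S) (x) inv(T): invert two small matrices, not one big one.
C_inv = np.kron(np.linalg.inv(S), np.linalg.inv(T))

# logdet(S (x) T) = m*logdet(S) + n*logdet(T) for S n x n, T m x m.
sign, logdet = np.linalg.slogdet(C)
logdet_factored = 4 * np.linalg.slogdet(S)[1] + 3 * np.linalg.slogdet(T)[1]
```

For realistic MEG dimensions (hundreds of channels, hundreds of samples), factoring this way is the difference between inverting two modest matrices and one matrix with tens of thousands of rows.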

  20. The display of spatial information and visually guided behavior

    NASA Technical Reports Server (NTRS)

    Bennett, C. Thomas

    1991-01-01

    The basic informational elements of spatial orientation are attitude and position within a coordinate system. The problem that faces aeronautical designers is that a pilot must deal with several coordinate systems, sometimes simultaneously. The display must depict unambiguously not only position and attitude, but also designate the relevant coordinate system. If this is not done accurately, spatial disorientation can occur. The different coordinate systems used in aeronautical tasks and the problems that occur in the display of spatial information are explained.

  1. Spatial Data Management System (SDMS)

    NASA Technical Reports Server (NTRS)

    Hutchison, Mark W.

    1994-01-01

    The Spatial Data Management System (SDMS) is a testbed for retrieval and display of spatially related material. SDMS permits the linkage of large graphical display objects with detail displays and explanations of their smaller components. SDMS combines UNIX workstations, MIT's X Window system, TCP/IP and WAIS information retrieval technology to prototype a means of associating aggregate data linked via spatial orientation. SDMS capitalizes upon and extends previous accomplishments of the Software Technology Branch in the area of Virtual Reality and Automated Library Systems.

  2. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    PubMed

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  3. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps

    PubMed Central

    Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237

  4. Introversion and individual differences in middle ear acoustic reflex function.

    PubMed

    Bar-Haim, Yair

    2002-10-01

    A growing body of psychophysiological evidence points to the possibility that individual differences in early auditory processing may contribute to social withdrawal and introverted tendencies. The present study assessed the response characteristics of the acoustic reflex arc of introverted-withdrawn and extraverted-sociable individuals. Introverts displayed a greater incidence of abnormal middle ear acoustic reflexes and lower acoustic reflex amplitudes than extraverts. These findings were strongest for stimuli presented at a frequency of 2000 Hz. Results are discussed in light of the controversy concerning the anatomic loci (peripheral vs. central neuronal activity) of the individual differences between introverts and extraverts in early auditory processing. Copyright 2002 Elsevier Science B.V.

  5. The Multisensory Sound Lab: Sounds You Can See and Feel.

    ERIC Educational Resources Information Center

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  6. A Design Guide for Nonspeech Auditory Displays.

    DTIC Science & Technology

    1985-01-01

    The report's table of contents covers, among other topics: forward and backward masking; binaural detection; binaural frequency selectivity; binaural signal duration; summation outside critical bands; and binaural loudness.

  7. Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome

    ERIC Educational Resources Information Center

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A.; Mottron, Laurent

    2010-01-01

    Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis.…

  8. An Assistive Computerized Learning Environment for Distance Learning Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Klemes, Joel; Epstein, Alit; Zuker, Michal; Grinberg, Nira; Ilovitch, Tamar

    2006-01-01

    The current study examines how a computerized learning environment assists students with learning disabilities (LD) enrolled in a distance learning course at the Open University of Israel. The technology provides computer display of the text, synchronized with auditory output and accompanied by additional computerized study skill tools which…

  9. Virtual Auditory Space Training-Induced Changes of Auditory Spatial Processing in Listeners with Normal Hearing.

    PubMed

    Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda

    2017-04-01

    Localization involves processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scanty in the literature. The present study was designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm, using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases: a pre-training, a training, and a post-training phase. At the pre- and post-training phases, both behavioral measures of spatial acuity and the electrophysiological P300 were administered. The spatial acuity of the participants in the free field and closed field was measured, apart from quantifying their binaural processing abilities. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. The results obtained from descriptive statistics were indicative of an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating the clinical relevance of these measures in documenting the impact of training. However, the same was not reflected in the P300. The training protocol used in the present study proves, on a preliminary basis, to be effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.

  10. Cross-training in hemispatial neglect: auditory sustained attention training ameliorates visual attention deficits.

    PubMed

    Van Vleet, Thomas M; DeGutis, Joseph M

    2013-03-01

    Prominent deficits in spatial attention evident in patients with hemispatial neglect are often accompanied by equally prominent deficits in non-spatial attention (e.g., poor sustained and selective attention, pronounced vigilance decrement). A number of studies now show that deficits in non-spatial attention influence spatial attention. Treatment strategies focused on improving vigilance or sustained attention may effectively remediate neglect. For example, a recent study employing Tonic and Phasic Alertness Training (TAPAT), a task that requires monitoring a constant stream of hundreds of novel scenes, demonstrated group-level (n=12) improvements after training compared to a test-retest control group or active treatment control condition on measures of visual search, midpoint estimation and working memory (DeGutis and Van Vleet, 2010). To determine whether the modality of treatment or stimulus novelty are key factors to improving hemispatial neglect, we designed a similar continuous performance training task in which eight patients with chronic and moderate to severe neglect were challenged to rapidly and continuously discriminate a limited set of centrally presented auditory tones once a day for 9 days (36-min/day). All patients demonstrated significant improvement in several, untrained measures of spatial and non-spatial visual attention, and as a group failed to demonstrate a lateralized attention deficit 24-h post-training compared to a control group of chronic neglect patients who simply waited during the training period. The results indicate that TAPAT-related improvements in hemispatial neglect are likely due to improvements in the intrinsic regulation of supramodal, non-spatial attentional resources. Published by Elsevier Ltd.

  11. Improving left spatial neglect through music scale playing.

    PubMed

    Bernardi, Nicolò Francesco; Cioffi, Maria Cristina; Ronchi, Roberta; Maravita, Angelo; Bricolo, Emanuela; Zigiotto, Luca; Perucca, Laura; Vallar, Giuseppe

    2017-03-01

    The study assessed whether the auditory reference provided by a music scale could improve spatial exploration of a standard musical instrument keyboard in right-brain-damaged patients with left spatial neglect. As performing music scales involves the production of predictable successive pitches, the expectation of the subsequent note may encourage patients to explore a larger extent of space on the left, affected side during the production of music scales from right to left. Eleven right-brain-damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age-matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales in three feedback conditions: with congruent sound, no sound, or random sound feedback provided by the keyboard. The number of keys played and the timing of key presses were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback, compared to both the silence and random sound conditions. Both the congruent and incongruent sound conditions were associated with a greater deceleration in all groups. The frame provided by the music scale improves exploration of the left side of space, contralateral to the damaged right hemisphere, in patients with left neglect. Performing a scale with congruent sounds may trigger, to some extent, preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in a more extensive spatial exploration. These findings also offer new perspectives for the rehabilitation of the disorder. © 2015 The British Psychological Society.

  12. Reduced temporal processing in older, normal-hearing listeners evident from electrophysiological responses to shifts in interaural time difference.

    PubMed

    Ozmeral, Erol J; Eddins, David A; Eddins, Ann C

    2016-12-01

    Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than being narrowly tuned to ITDs, neural channels are broadly tuned to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens, and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts of rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes of the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both in the measured latencies and amplitudes of the global field potentials and in the source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to show a reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging. Copyright © 2016 the American Physiological Society.
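
    The nontopographic hemifield code described above can be sketched as an opponent two-channel model: each channel is broadly tuned to ITDs in one hemifield, and the difference between the channel rates gives the decoded laterality. The sigmoid tuning shape and slope below are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def channel_rate(itd_us, preferred_side, slope=1 / 200):
    """Broadly tuned, normalized firing rate of one hemifield channel.
    preferred_side is +1 (right-tuned) or -1 (left-tuned); ITD in microseconds,
    positive meaning the sound leads at the right ear."""
    return 1 / (1 + np.exp(-slope * preferred_side * itd_us))

def decode_laterality(itd_us):
    """Perceived laterality from the opponent difference of channel rates:
    negative -> left of midline, 0 -> midline, positive -> right."""
    return channel_rate(itd_us, +1) - channel_rate(itd_us, -1)

itds = np.array([-500.0, 0.0, 500.0])   # microseconds
lat = decode_laterality(itds)           # left, midline, right
```

Under this scheme, a medial shift of the tuning functions and a shallower sigmoid slope make distinct, testable predictions for the evoked-response changes the study contrasts.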

  13. Effect of auditory presentation of words on color naming: the intermodal Stroop effect.

    PubMed

    Shimada, H

    1990-06-01

    To test two hypotheses about the Stroop effect (the automatic parallel-processing model vs the feature integration theory), an intermodal presentation method was introduced. Intermodal presentation (auditory presentation of the distractor word and visual presentation of the color patch) completely separates the color and word information. In the present experiment, subjects were required to name the color patch on the CRT and to ignore the auditory color word. A 5 (stimulus onset asynchronies) x 4 (levels of congruency) analysis of variance with repeated measures was performed on the response times. Both main effects and their interaction were significant. The findings indicate that the Stroop effect occurs even when the color and word components are not presented in the same spatial location. These results suggest that the feature integration theory cannot explain the mechanisms underlying the Stroop effect.

  14. Spatial displays as a means to increase pilot situational awareness

    NASA Technical Reports Server (NTRS)

    Fadden, Delmar M.; Braune, Rolf; Wiedemann, John

    1989-01-01

    These experiences raise a number of concerns for future spatial-display developers. While the promise of spatial displays is great, the cost of their development will be correspondingly large. The knowledge and skills that must be coordinated to ensure successful results are unprecedented. From the viewpoint of the designer, basic knowledge of how human beings perceive and process complex displays appears fragmented and largely unquantified. Methodologies for display development require prototyping and testing with subject pilots for even small changes. Useful characterizations of the range of differences between individual users are nonexistent or at best poorly understood. The nature, significance, and frequency of interpretation errors associated with complex integrated displays are unexplored and undocumented territory. Graphic displays have intuitive appeal and can achieve face validity much more readily than earlier symbolic displays; the risk of misleading the pilot is correspondingly greater. Thus, while some in the research community are developing the tools and techniques necessary for effective spatial-display development, potential users must be educated about the issues so that informed choices can be made. The scope of the task facing all is great; the task is challenging, and the potential for meaningful contributions at all levels is high indeed.

  15. Different neural activities support auditory working memory in musicians and bilinguals.

    PubMed

    Alain, Claude; Khatamian, Yasha; He, Yu; Lee, Yunjo; Moreno, Sylvain; Leung, Ada W S; Bialystok, Ellen

    2018-05-17

    Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in monolingual musicians (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and at both levels of difficulty, musicians showed lower brain activity than controls in the superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference from competing semantic activations in multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience. © 2018 New York Academy of Sciences.

  16. Tympanal travelling waves in migratory locusts.

    PubMed

    Windmill, James F C; Göpfert, Martin C; Robert, Daniel

    2005-01-01

    Hearing animals, including many vertebrates and insects, have the capacity to analyse the frequency composition of sound. In mammals, frequency analysis relies on the mechanical response of the basilar membrane in the cochlear duct. These vibrations take the form of a slow vibrational wave propagating along the basilar membrane from base to apex. Known as von Békésy's travelling wave, this wave displays amplitude maxima at frequency-specific locations along the basilar membrane, providing a spatial map of the frequency of sound: a tonotopy. In their structure, insect auditory systems may not be as sophisticated as those of mammals, yet some are known to perform sound frequency analysis. In the desert locust, this analysis arises from the mechanical properties of the tympanal membrane. In effect, the spatial decomposition of incident sound into discrete frequency components involves a tympanal travelling wave that funnels mechanical energy to specific tympanal locations, where distinct groups of mechanoreceptor neurones project. Notably, the observed tympanal deflections differ from those predicted by drum theory. Although phenomenologically equivalent, von Békésy's and the locust's waves differ in their physical implementation: von Békésy's wave is born from interactions between the anisotropic basilar membrane and the surrounding incompressible fluids, whereas the locust's wave rides on an anisotropic membrane suspended in air. The locust's ear thus combines in one structure the functions of sound reception and frequency decomposition.

  17. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.

    PubMed

    Tonelli, Alessia; Gori, Monica; Brayda, Luca

    2016-01-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues conferred no further benefit once spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing space auditory task as effectively as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  18. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, ``virtual reality'' approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
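    The binaural cues surveyed in the tutorial include the interaural time difference (ITD). As a rough illustration, the Woodworth spherical-head approximation gives ITD in closed form; the head radius and sound speed defaults below are illustrative assumptions, not values from the tutorial:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source
    at a given azimuth, via Woodworth's spherical-head formula:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from zero at the midline to roughly 0.66 ms at the side
print(round(woodworth_itd(90) * 1e6))  # -> 656 (microseconds)
```

    The complementary interaural level difference and spectral (pinna) cues have no comparable closed form and are typically captured with measured head-related transfer functions.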

  19. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
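    The dynamic updating of the prior expectation of audiovisual alignment can be caricatured as Bayesian updating of a Bernoulli alignment probability. The Beta-Bernoulli sketch below is an illustrative assumption, not the authors' published model:

```python
def update_alignment_belief(trials, a=1.0, b=1.0):
    """Posterior mean of P(next audiovisual pair is aligned) under a
    Beta(a, b) prior, updated after each trial outcome
    (1 = spatially aligned, 0 = disparate). Hypothetical sketch."""
    beliefs = []
    for aligned in trials:
        a += aligned
        b += 1 - aligned
        beliefs.append(a / (a + b))  # posterior mean after this trial
    return beliefs

# Repeatedly aligned trials push the expected alignment probability up
print([round(p, 3) for p in update_alignment_belief([1, 1, 1, 1])])
# -> [0.667, 0.75, 0.8, 0.833]
```

    Under such a scheme, blocks containing only aligned stimuli would drive the belief toward 1, consistent with the faster reaction times reported above.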

  20. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  1. The Relationship between Types of Attention and Auditory Processing Skills: Reconsidering Auditory Processing Disorder Diagnosis

    PubMed Central

    Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva

    2018-01-01

    Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated test of everyday attention for children, with visual and auditory attention tasks, the listening in spatialized noise sentences test, the children's communication checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationship between these tests and Cochran's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD than an attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would have otherwise been considered not having ADs. 
The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality but further examination of types of attention in both modalities is required. Revising diagnostic criteria to incorporate attention tests and the inattentive type of APD in the test battery, provides additional useful data to clinicians to ensure careful interpretation of APD assessments. PMID:29441033
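    Cochran's Q, used above to compare diagnosis proportions across batteries, is simple to compute for matched binary outcomes; the subject data in the example below are hypothetical, not the study's:

```python
def cochrans_q(blocks):
    """Cochran's Q statistic for k matched binary outcomes.
    blocks: rows = subjects, columns = diagnostic batteries,
    entries 1 = diagnosed, 0 = not. Under the null of equal
    proportions, Q is approximately chi-square with k-1 df."""
    k = len(blocks[0])
    col = [sum(row[j] for row in blocks) for j in range(k)]  # column totals
    row = [sum(r) for r in blocks]                           # row totals
    n = sum(row)
    num = (k - 1) * (k * sum(c * c for c in col) - n * n)
    den = k * n - sum(r * r for r in row)
    return num / den

# Four hypothetical subjects assessed by two batteries
print(cochrans_q([[1, 1], [1, 0], [1, 0], [0, 0]]))  # -> 2.0
```

    The resulting Q is then compared against the chi-square distribution with k-1 degrees of freedom to decide whether the batteries' diagnosis rates differ.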

  2. Robust Models for Operator Workload Estimation

    DTIC Science & Technology

    2015-03-01

    piloted aircraft (RPA) simultaneously, a vast improvement in resource utilization compared to existing operations that require several operators per...into distinct cognitive channels (visual, auditory, spatial, etc.) based on our ability to multitask effectively as long as no one channel is

  3. Modality-specificity of sensory aging in vision and audition: evidence from event-related potentials.

    PubMed

    Ceponiene, R; Westerfield, M; Torki, M; Townsend, J

    2008-06-18

    Major accounts of aging implicate changes in processing external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were age-related enhancements in the late P2 range and positive behavior-early N2 correlation, the latter suggesting that N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neuro-biological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.

  4. What Can Sounds Tell Us About Earthquake Interactions?

    NASA Astrophysics Data System (ADS)

    Aiken, C.; Peng, Z.

    2012-12-01

    It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influences earthquakes can have on each other. Recent studies using auditory display [e.g. Kilb et al., 2012; Peng et al. 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are influential to understanding the underlying physics of earthquakes and other seismic phenomena such as tremors in addition to their source characteristics (e.g. frequency contents, amplitudes). Earthquake interactions can include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e. aftershocks) or even a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g. piano keys) as well as audifying seismic data using time-compression. 
Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to the understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these interactions can tell us about the source characteristics of earthquakes and tremors.
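    Audification by time compression, as described above, amounts to replaying raw seismic samples at an audio rate. A minimal sketch follows; the sampling rates and the WAV container choice are illustrative, not taken from the cited animations:

```python
import io
import math
import struct
import wave

def audify(samples, data_rate_hz, playback_rate_hz=44100):
    """Audify a waveform by time compression: write the raw samples as
    a mono 16-bit WAV at an audio playback rate, so data recorded at
    data_rate_hz plays back sped up by playback_rate_hz / data_rate_hz.
    Returns (speedup_factor, wav_bytes)."""
    peak = max(abs(s) for s in samples) or 1.0
    frames = b"".join(struct.pack("<h", int(32767 * s / peak))
                      for s in samples)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(playback_rate_hz)
        w.writeframes(frames)
    return playback_rate_hz / data_rate_hz, buf.getvalue()

# A 100 Hz seismic channel played at 44.1 kHz is sped up 441x,
# shifting a 1 Hz tremor band up to an audible 441 Hz.
tremor = [math.sin(2 * math.pi * 1.0 * t / 100.0) for t in range(6000)]
speedup, wav = audify(tremor, data_rate_hz=100.0)
print(speedup)  # -> 441.0
```

    Sonification of catalogs (mapping events to piano notes) is a separate step, typically done by converting magnitude and origin time to MIDI pitch and onset rather than by resampling waveforms.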

  5. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
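    The image-to-sound mapping described above (a two-dimensional map rendered as an audio signal over frequency and time) can be sketched by assigning each image row a sine oscillator and scanning columns left to right. This is a simplified stand-in for the VISOR processing chain; all parameter values are illustrative:

```python
import math

def image_to_audio(image, sample_rate=8000, col_dur=0.05,
                   f_lo=200.0, f_hi=2000.0):
    """Render a 2-D brightness map (rows x cols, values 0..1) as audio:
    each column becomes a time slice of col_dur seconds, each row a
    sine oscillator whose frequency rises with row index and whose
    amplitude follows pixel brightness. Illustrative parameters."""
    rows, cols = len(image), len(image[0])
    freqs = [f_lo + (f_hi - f_lo) * r / max(rows - 1, 1)
             for r in range(rows)]
    n = int(sample_rate * col_dur)
    audio = []
    for c in range(cols):
        for i in range(n):
            t = i / sample_rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            audio.append(s / rows)  # normalize by row count
    return audio

# A 3x2 "image" with one bright pixel per column -> two successive tones
audio = image_to_audio([[1, 0], [0, 0], [0, 1]])
print(len(audio))  # 2 columns x 0.05 s x 8000 Hz = 800 samples
```

    Listening to such a stream, a bright region low in the image is heard as a low tone early or late in the sweep depending on its column, which is the kind of pattern the hearing system can learn to interpret.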

  6. Effects of combining vertical and horizontal information into a primary flight display

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Nataupsky, Mark; Steinmetz, George G.

    1987-01-01

    A ground-based aircraft simulation study was conducted to determine the effects of combining vertical and horizontal flight information into a single display. Two display configurations were used in this study. The first configuration consisted of a Primary Flight Display (PFD) format and a Horizontal Situation Display (HSD) with the PFD displayed conventionally above the HSD. For the second display configuration, the HSD format was combined with the PFD format. Four subjects participated in this study. Data were collected on performance parameters, pilot-control inputs, auditory evoked response parameters (AEP), oculometer measurements (eye-scan), and heart rate. Subjective pilot opinion was gathered through questionnaire data and scorings for both the Subjective Workload Assessment Technique (SWAT) and the NASA Task Load Index (NASA-TLX). The results of this study showed that, from a performance and subjective standpoint, the combined configuration was better than the separate configuration. Additionally, both the eye-transition and eye-dwell times for the separate HSD were notably higher than expected, with a 46% increase in available visual time when going from double to single display configuration.

  7. Three-dimensional virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA-Ames is described in detail, and finally some critical research issues for the future are outlined.

  8. Upgrading the Space Shuttle Caution and Warning System

    NASA Technical Reports Server (NTRS)

    McCandless, Jeffrey W.; McCann, Robert S.; Hilty, Bruce T.

    2005-01-01

    A report describes the history and the continuing evolution of an avionic system aboard the space shuttle, denoted the caution and warning system, that generates visual and auditory displays to alert astronauts to malfunctions. The report focuses mainly on planned human-factors-oriented upgrades of an alphanumeric fault-summary display generated by the system. Such upgrades are needed because the display often becomes cluttered with extraneous messages that contribute to the difficulty of diagnosing malfunctions. In the first of two planned upgrades, the fault-summary display will be rebuilt with a more logical task-oriented graphical layout and multiple text fields for malfunction messages. In the second upgrade, information displayed will be changed, such that text fields will indicate only the sources (that is, root causes) of malfunctions; messages that are not operationally useful will no longer appear on the displays. These and other aspects of the upgrades are based on extensive collaboration among astronauts, engineers, and human-factors scientists. The report describes the human-factors principles applied in the upgrades.

  9. Neural signature of the conscious processing of auditory regularities

    PubMed Central

    Bekinschtein, Tristan A.; Dehaene, Stanislas; Rohaut, Benjamin; Tadel, François; Cohen, Laurent; Naccache, Lionel

    2009-01-01

    Can conscious processing be inferred from neurophysiological measurements? Some models stipulate that the active maintenance of perceptual representations across time requires consciousness. Capitalizing on this assumption, we designed an auditory paradigm that evaluates cerebral responses to violations of temporal regularities that are either local in time or global across several seconds. Local violations led to an early response in auditory cortex, independent of attention or the presence of a concurrent visual task, whereas global violations led to a late and spatially distributed response that was only present when subjects were attentive and aware of the violations. We could detect the global effect in individual subjects using functional MRI and both scalp and intracerebral event-related potentials. Recordings from 8 noncommunicating patients with disorders of consciousness confirmed that only conscious individuals presented a global effect. Taken together these observations suggest that the presence of the global effect is a signature of conscious processing, although it can be absent in conscious subjects who are not aware of the global auditory regularities. This simple electrophysiological marker could thus serve as a useful clinical tool. PMID:19164526

  10. [Perception of approaching and withdrawing sound sources following exposure to broadband noise. The effect of spatial domain].

    PubMed

    Malinina, E S

    2014-01-01

    The spatial specificity of auditory aftereffect was studied after a short-time adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase in the amplitude of impulses in a sequence was perceived by listeners as approach of the sound source, while a decrease in amplitude was perceived as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The obtained data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic features that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, for both distances, an asymmetry of psychophysical curves was observed: the listeners estimated the test stimuli more often as approaching. This overestimation of test stimuli as approaching was more pronounced when they were presented from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only when the adapting and test stimuli spatially coincided and were absent when they were separated. The aftereffects observed in the two spatial domains were similar in direction and magnitude: the listeners estimated the test stimuli more often as withdrawing compared to control. 
The result of this aftereffect was a restoration of the symmetry of the psychometric curves and of the equiprobable estimation of the direction of movement of the test signals.

  11. Spatial noise in microdisplays for near-to-eye applications

    NASA Astrophysics Data System (ADS)

    Hastings, Arthur R., Jr.; Draper, Russell S.; Wood, Michael V.; Fellowes, David A.

    2011-06-01

    Spatial noise in imaging systems has been characterized and its impact on image quality metrics has been addressed primarily with respect to the introduction of this noise at the sensor component. However, sensor fixed pattern noise is not the only source of fixed pattern noise in an imaging system. Display fixed pattern noise cannot be easily mitigated in processing and, therefore, must be addressed. In this paper, a thorough examination of the amount and the effect of display fixed pattern noise is presented. The specific manifestation of display fixed pattern noise is dependent upon the display technology. Utilizing a calibrated camera, US Army RDECOM CERDEC NVESD has developed a microdisplay (μdisplay) spatial noise data collection capability. Noise and signal power spectra were used to characterize the display signal to noise ratio (SNR) as a function of spatial frequency analogous to the minimum resolvable temperature difference (MRTD) of a thermal sensor. The goal of this study is to establish a measurement technique to characterize μdisplay limiting performance to assist in proper imaging system specification.
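    The SNR-versus-spatial-frequency characterization can be illustrated with power spectra of a signal slice and a fixed-pattern-noise slice. The 1-D sketch below is an assumption about the general method, not NVESD's calibrated measurement procedure:

```python
import cmath
import math

def power_spectrum(x):
    """Naive DFT power spectrum |X_k|^2 / N of a 1-D slice."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def snr_by_frequency(signal_slice, noise_slice):
    """Per-bin display SNR (dB) as the ratio of signal power to
    fixed-pattern-noise power at each spatial-frequency bin.
    One-dimensional for brevity; a real measurement would use 2-D
    spectra computed from calibrated camera frames."""
    ps, pn = power_spectrum(signal_slice), power_spectrum(noise_slice)
    return [10 * math.log10(s / n) if n > 0 else float("inf")
            for s, n in zip(ps, pn)]

# A sinusoidal test pattern with fixed-pattern noise at 1/10 amplitude
sig = [math.sin(2 * math.pi * t / 8) for t in range(8)]
fpn = [0.1 * s for s in sig]
print(round(snr_by_frequency(sig, fpn)[1], 1))  # -> 20.0 dB at bin 1
```

    Plotting this SNR against spatial frequency yields the display analogue of a sensor's MRTD curve mentioned above.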

  12. Controlled Low-Pressure Blast-Wave Exposure Causes Distinct Behavioral and Morphological Responses Modelling Mild Traumatic Brain Injury, Post-Traumatic Stress Disorder, and Comorbid Mild Traumatic Brain Injury-Post-Traumatic Stress Disorder.

    PubMed

    Zuckerman, Amitai; Ram, Omri; Ifergane, Gal; Matar, Michael A; Sagi, Ram; Ostfeld, Ishay; Hoffman, Jay R; Kaplan, Zeev; Sadot, Oren; Cohen, Hagit

    2017-01-01

    The intense focus in the clinical literature on the mental and neurocognitive sequelae of explosive blast-wave exposure, especially when comorbid with post-traumatic stress-related disorders (PTSD) is justified, and warrants the design of translationally valid animal studies to provide valid complementary basic data. We employed a controlled experimental blast-wave paradigm in which unanesthetized animals were exposed to visual, auditory, olfactory, and tactile effects of an explosive blast-wave produced by exploding a thin copper wire. By combining cognitive-behavioral paradigms and ex vivo brain MRI to assess mild traumatic brain injury (mTBI) phenotype with a validated behavioral model for PTSD, complemented by morphological assessments, this study sought to examine our ability to evaluate the biobehavioral effects of low-intensity blast overpressure on rats, in a translationally valid manner. There were no significant differences between blast- and sham-exposed rats on motor coordination and strength, or sensory function. Whereas most male rats exposed to the blast-wave displayed normal behavioral and cognitive responses, 23.6% of the rats displayed a significant retardation of spatial learning acquisition, fulfilling criteria for mTBI-like responses. In addition, 5.4% of the blast-exposed animals displayed an extreme response in the behavioral tasks used to define PTSD-like criteria, whereas 10.9% of the rats developed both long-lasting and progressively worsening behavioral and cognitive "symptoms," suggesting comorbid PTSD-mTBI-like behavioral and cognitive response patterns. Neither group displayed changes on MRI. Exposure to experimental blast-wave elicited distinct behavioral and morphological responses modelling mTBI-like, PTSD-like, and comorbid mTBI-PTSD-like responses. This experimental animal model can be a useful tool for elucidating neurobiological mechanisms underlying the effects of blast-wave-induced mTBI and PTSD and comorbid mTBI-PTSD.

  13. Development of a tactile display with 5 mm resolution using an array of magnetorheological fluid

    NASA Astrophysics Data System (ADS)

    Ishizuka, Hiroki; Miki, Norihisa

    2017-06-01

    In this study, we demonstrate the design and evaluation of a stiffness tactile display using a magnetorheological (MR) fluid. The tactile display is based on the change in mechanical properties under an external magnetic field. In the tactile display, the MR fluid is encapsulated in chambers of 3 mm diameter and arranged at intervals of 2 mm. Magnetic fields were spatially applied to the tactile display using neodymium magnets of 3.5 mm diameter. The design and spatial magnetic field application enable the tactile display to present stiff dots of 5 mm resolution. We confirmed that the tactile display can present a spatial stiff dot and its pattern on the surface by compression experiments. Sensory evaluation revealed that the users were able to perceive the approximate position of the stiff dots. From the experiments, the tactile display has potential as a palpation tactile display and requires improvement to present various types of tissues.

  14. The Development of Infant Discrimination of Affect in Multimodal and Unimodal Stimulation: The Role of Intersensory Redundancy

    ERIC Educational Resources Information Center

    Flom, Ross; Bahrick, Lorraine E.

    2007-01-01

    This research examined the developmental course of infants' ability to perceive affect in bimodal (audiovisual) and unimodal (auditory and visual) displays of a woman speaking. According to the intersensory redundancy hypothesis (L. E. Bahrick, R. Lickliter, & R. Flom, 2004), detection of amodal properties is facilitated in multimodal stimulation…

  15. Human Factors Engineering Bibliographic Series. Volume 2: 1960-1964 Literature

    DTIC Science & Technology

    1966-10-01

    flutter discrimination, melodic and temporal) binaural vs. monaural equipment and methods (e.g., anechoic chambers, audiometric devices, communication...brightness, duration, timbre, vocality) stimulus mixtures (e.g., harmonics, beats , combination tones, modulations) thresholds training, nonverbal--see Training...scales and aids) Beats --see Audition (stimulus mixtures) Bells--see Auditory (displays, nonverbal) Belts, Harnesses, and other Restraining Devices--see

  16. Nonvisual Route Following with Guidance from a Simple Haptic or Auditory Display

    ERIC Educational Resources Information Center

    Marston, James R.; Loomis, Jack M.; Klatzky, Roberta L.; Golledge, Reginald G.

    2007-01-01

    A path-following experiment, using a global positioning system, was conducted with participants who were legally blind. On- and off-course confirmations were delivered by either a vibrotactile or an audio stimulus. These simple binary cues were sufficient for guidance and point to the need to offer output options for guidance systems for people…

  17. Spatial cohesion of adult male chimpanzees (Pan troglodytes verus) in Taï National Park, Côte d'Ivoire.

    PubMed

    Eckhardt, Nadin; Polansky, Leo; Boesch, Christophe

    2015-02-01

    Group living animals can exhibit fission-fusion behavior whereby individuals temporarily separate to reduce the costs of living in large groups. Primates living in groups with fission-fusion dynamics face numerous challenges in maintaining spatial cohesion, especially in environments with limited visibility. Here we investigated the spatial cohesion of adult male chimpanzees (Pan troglodytes verus) living in Taï National Park, Côte d'Ivoire, to better understand the mechanisms by which individuals maintain group cohesion during fission-fusion events. Over a 3-year period, we simultaneously tracked the movements of 2-4 males for 4-12 hr on up to 12 consecutive days using handheld GPS devices that recorded locations at one-minute intervals. Analyses of the male's inter-individual distance (IID) showed that the maximum, median, and mean IID values across all observations were 7.2 km, 73 m, and 483 m, respectively. These males (a) had maximum daily IID values below the limits of auditory communication (<1 km) for 63% of the observation time, (b) remained out of visual range (≥100 m) for 46% of observation time, and (c) remained within auditory range for 70% of the time when they were in different parties. We compared the observed distribution of IIDs with a random distribution obtained from permutations of the individuals' travel paths using Kolmogorov-Smirnov tests. Observation IID values were significantly smaller than those generated by the permutation procedure. We conclude that these male chimpanzees actively maintain cohesion when out of sight, and that auditory communication is one likely mechanism by which they do so. We discuss mechanisms by which chimpanzees may maintain the level of cohesion observed. This study provides a first analysis of spatial group cohesion over large distances in forest chimpanzees using high-resolution tracking, and illustrates the utility of such data for quantifying socio-ecological processes in primate ecology. 
© 2014 Wiley Periodicals, Inc.
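
    The path-permutation analysis described in this abstract can be sketched as follows. All tracks and parameters here are hypothetical placeholders, not the study's data: two synthetic GPS tracks share a common drift (simulating coordinated travel), and the null distribution is built by time-shifting one track so path shapes are preserved while temporal coordination is destroyed, then comparing distributions with a two-sample Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical synchronized GPS tracks (x, y in metres), one fix per minute.
# A shared random-walk drift makes the two individuals travel together.
n = 500
common = np.cumsum(rng.normal(0, 20, size=(n, 2)), axis=0)
track_a = common + rng.normal(0, 30, size=(n, 2))
track_b = common + rng.normal(0, 30, size=(n, 2))

def iid(a, b):
    """Inter-individual distance at each synchronized GPS fix."""
    return np.linalg.norm(a - b, axis=1)

observed = iid(track_a, track_b)

# Null model: circularly shift one individual's path in time, preserving
# each path's shape but breaking the temporal coordination between them.
null = []
for _ in range(200):
    shift = int(rng.integers(1, n))
    null.append(iid(track_a, np.roll(track_b, shift, axis=0)))
null = np.concatenate(null)

# Two-sample KS test: is the observed IID distribution different from
# (here: shifted toward smaller values than) the permuted null?
stat, p = ks_2samp(observed, null)
```

    A significant result together with a smaller observed median IID supports active cohesion rather than incidental overlap of independent travel paths.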

  18. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework.

    PubMed

    Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas

    2016-01-01

    Animals try to make sense of sensory information from multiple modalities by categorizing it into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulation. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities; (2) predict behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features; and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
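
    The two core ingredients of this model, a leaky integrator serving as fading working memory and a similarity measure over the resulting memory traces, can be sketched generically. The pulse timings, time constant, and the normalized-correlation similarity below are illustrative assumptions, not the paper's actual equations:

```python
import numpy as np

def leaky_integrate(u, tau=0.2, dt=0.001):
    """Euler-discretized leaky integrator x' = (-x + u) / tau.
    Acts as a fading working memory of the input signal u."""
    x = np.zeros_like(u, dtype=float)
    for t in range(1, len(u)):
        x[t] = x[t - 1] + dt * (-x[t - 1] + u[t - 1]) / tau
    return x

# Hypothetical 1-D visual and auditory event signals: brief pulses
# with a small (20 ms) temporal disparity.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
visual = ((t > 0.30) & (t < 0.35)).astype(float)
auditory = ((t > 0.32) & (t < 0.37)).astype(float)

def similarity(a, b):
    """Normalized correlation of two memory traces; values near 1
    favour a common-cause interpretation."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

s_near = similarity(leaky_integrate(visual), leaky_integrate(auditory))

# A large temporal disparity (pulse 400 ms later) lowers the similarity,
# pushing the inference toward separate causes.
auditory_far = ((t > 0.70) & (t < 0.75)).astype(float)
s_far = similarity(leaky_integrate(visual), leaky_integrate(auditory_far))
```

    In a decision-making framework, the similarity value would then feed the evidence units that select between "common cause" and "separate causes" responses.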

  19. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual, but not auditory, targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
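
    The race model inequality analysis mentioned in this abstract (Miller's inequality) tests whether bimodal reaction times are faster than probability summation of two independent unisensory "races" can explain. A generic sketch, using hypothetical normally distributed reaction times rather than the ferret data:

```python
import numpy as np

def ecdf(samples, grid):
    """Empirical cumulative distribution function evaluated on a grid."""
    samples = np.sort(samples)
    return np.searchsorted(samples, grid, side="right") / len(samples)

rng = np.random.default_rng(1)

# Hypothetical reaction times (ms): bimodal trials are markedly faster
# than either unisensory condition.
rt_a = rng.normal(320, 40, 200)    # auditory-only
rt_v = rng.normal(340, 45, 200)    # visual-only
rt_av = rng.normal(270, 35, 200)   # auditory-visual

grid = np.linspace(150, 500, 100)
F_a, F_v, F_av = (ecdf(x, grid) for x in (rt_a, rt_v, rt_av))

# Miller's race model inequality: under probability summation of two
# independent races, F_av(t) can never exceed F_a(t) + F_v(t).
bound = np.minimum(F_a + F_v, 1.0)
violation = F_av - bound  # positive values indicate neural integration
```

    Violations at the fast tail of the distribution (positive `violation` values) rule out probability summation and are taken as behavioural evidence of genuine multisensory integration.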

  20. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2018-06-01

    In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method more frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same response methods as Experiment 1 but with the methods interleaved, showing a weak, but complex, mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the underestimation previously reported for distances over 2 m.
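
    The compression and bias of a distance-response method are commonly summarized by fitting a power function d' = k · d^a to the perceived-versus-physical distances (a < 1 indicates compression, k an overall bias). A sketch with hypothetical data for 1-6 m sources; the exponent, bias, and noise level below are illustrative, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical perceived distances for sources at 1-6 m (10 trials each),
# generated with a compressive exponent of 0.6 and multiplicative noise.
true_d = np.repeat(np.arange(1.0, 7.0), 10)
perceived = 0.9 * true_d**0.6 * np.exp(rng.normal(0, 0.1, true_d.size))

# Fit d' = k * d**a by linear regression in log-log space:
# log d' = a * log d + log k.
a, log_k = np.polyfit(np.log(true_d), np.log(perceived), 1)
k = np.exp(log_k)
# a < 1 -> compressive mapping; k -> overall under/overestimation.
```

    Comparing the fitted (a, k) pairs across response methods is one way to quantify the differences in compression and bias that the abstract reports between CMDL and verbal report.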
